Saturday, July 2, 2022

Recent Questions - Server Fault



Change ifcfg-eth* files

Posted: 02 Jul 2022 05:02 PM PDT

I would like to determine the interface names of the hosts Ansible connects to and use them as variables; the end goal is to create an ifcfg config file for every network interface on every host in the inventory. The problem is that I have hosts with multiple interfaces and MAC addresses, so I've tried this to retrieve the interfaces:

- shell: ip -o link | grep [MACADDR] | awk '{print $2}' | awk '{sub(/:/,X,$0);print}'
  register: interface

- shell: ip -o link | grep [MACADDR] | awk '{print $17}'
  register: macaddr

But it gets more complicated because I have to write an ifcfg file for every interface on every host (2-3 interfaces per host). This is what I've tried in my playbook:

- name: Create ifcfg file
  blockinfile:
    path: /etc/sysconfig/network-scripts/ifcfg-VLAN
    create: yes
    insertafter: EOF
    block: |
      DEVICE={{ interface }}
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=xx.xxx.xxx.xx
      NETMASK=255.255.255.0
      DEFROUTE=no
      HWADDR={{ macaddr }}
  when: "{{ macaddr == item.network_1.macaddr }}"
  with_items: "{{ networkinfo }}"

i have a vars_file with this:

networkinfo:
  network_1:
    macaddr: 00:50:56:a2:22:27
  network_2:
    macaddr: 00:50:56:a2:58:a5

But this doesn't work. Does anyone know how to straighten out this mess?
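For comparison, here is a minimal sketch of how gathered facts could replace the shell/awk pipelines entirely. The loop and fact lookups are assumptions to verify: they require gather_facts to be enabled, and interface names containing dashes appear with underscores in fact names.

- name: Create one ifcfg file per interface (sketch, untested)
  blockinfile:
    path: "/etc/sysconfig/network-scripts/ifcfg-{{ item }}"
    create: yes
    block: |
      DEVICE={{ item }}
      ONBOOT=yes
      BOOTPROTO=static
      HWADDR={{ hostvars[inventory_hostname]['ansible_' + item]['macaddress'] }}
  loop: "{{ ansible_interfaces | difference(['lo']) }}"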

Thanks in advance!

How to initiate manual AWS Route 53 Failover when using Health Checks?

Posted: 02 Jul 2022 03:26 PM PDT

I am using Route 53 with DNS Failover configured based on Health Checks. This is great, but sometimes I want to manually initiate a failover.

How can I manually cause a failover, for example in the event of maintenance? If I rely on the automated failover, it takes a number of failed requests before the failover actually occurs, which is why I would like to do a manual failover so that there is zero downtime.

Currently we have to manually edit the IP addresses, but simply initiating a failover would be much better. Any help here would be greatly appreciated.
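For what it's worth, a minimal sketch of one way to force a failover without editing records: Route 53 health checks support an Inverted flag, so temporarily inverting the primary's health check should make Route 53 treat the healthy endpoint as unhealthy (the health-check ID below is a placeholder):

aws route53 update-health-check \
    --health-check-id 01234567-89ab-cdef-0123-456789abcdef \
    --inverted

Running the same command with --no-inverted should restore normal behavior.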

How to change this specific SSH message?

Posted: 02 Jul 2022 03:47 PM PDT

So I've been using these servers from Privex, which are great, but on login they show a default message (screenshot below). I'm curious how to change this. I've been researching it for a while, and I know about /etc/banner and the MOTD and so on, but the text isn't in any of those usual files. Is this something the provider does on their end that can't be changed from inside the VPS? This is Debian 11, by the way.

The only part I can edit that IS part of the MOTD is the bottom section that says "The programs included with the Debian...." etc. However, I want to edit the "Privex Servers" text.

SSH Login Message screenshot
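For reference, these are the usual places such text comes from on a stock Debian 11; checking them may narrow down where the provider injects it (a quick sketch, with no Privex-specific assumptions):

# pre-authentication banner, if sshd_config sets one
grep -i '^Banner' /etc/ssh/sshd_config
# dynamic MOTD scripts run by pam_motd at login
ls /etc/update-motd.d/
# static MOTD text
cat /etc/motd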

Error from server: etcdserver: request timed out - error after etcd backup and restore

Posted: 02 Jul 2022 12:55 PM PDT

I have done an etcd backup and then a restore on the same cluster, and now I can list resources but can't create or delete them. It's a one-master, two-worker setup installed using kubeadm. I had been running this cluster for almost 8 months with no issues before. Any advice would be highly appreciated :)

kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
mongodb-deployment-79c8fcfd4-krst4   1/1     Running   1          7d8h
nginx-deployment-5b5b7764d-6x457     2/2     Running   2          7d5h
nginx-deployment-5b5b7764d-rxfhn     2/2     Running   2          7d5h
nginx-deployment-5b5b7764d-zw7v8     2/2     Running   2          7d5h
pod-with-toleration                  1/1     Running   5          26d

kubectl delete pod pod-with-toleration
Error from server: etcdserver: request timed out

sudo ETCDCTL_API=3 etcdctl member list \
>   --cert=/etc/kubernetes/pki/etcd/server.crt \
>   --key=/etc/kubernetes/pki/etcd/server.key \
>   --cacert=/etc/kubernetes/pki/etcd/ca.crt
a26af52927f3b0b7, started, master, https://172.31.4.108:2380, https://172.31.4.108:2379

kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-j7fk6         1/1     Running   57         233d
coredns-558bd4d5db-kdkbb         1/1     Running   57         233d
etcd-master                      1/1     Running   57         233d
kube-apiserver-master            1/1     Running   60         216d
kube-controller-manager-master   1/1     Running   58         233d
kube-proxy-2kpwp                 1/1     Running   57         233d
kube-proxy-q54dh                 1/1     Running   49         223d
kube-proxy-xc9rx                 1/1     Running   49         223d
kube-scheduler-master            1/1     Running   58         233d
weave-net-d2tf8                  2/2     Running   123        224d
weave-net-lxt7m                  2/2     Running   108        223d
weave-net-w4mv2                  2/2     Running   103        223d

kubectl logs -n kube-system etcd-master
2022-07-02 19:25:37.798497 W | etcdserver: failed to revoke 30b7819a6ceffaa1 ("etcdserver: request timed out")
2022-07-02 19:25:39.312238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
WARNING: 2022/07/02 19:25:40 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2022-07-02 19:25:41.798974 W | etcdserver: failed to revoke 30b7819ebd239fac ("etcdserver: request timed out")
2022-07-02 19:25:43.363679 I | embed: rejected connection from "172.31.24.138:33716" (error "tls: first record does not look like a TLS handshake", ServerName "")
WARNING: 2022/07/02 19:25:44 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2022/07/02 19:25:44 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2022-07-02 19:25:44.797749 W | etcdserver: failed to revoke 30b7819ebd239292 ("etcdserver: request timed out")
2022-07-02 19:25:44.797781 W | etcdserver: failed to revoke 30b7819a6cefea2b ("etcdserver: request timed out")

sudo vim /etc/kubernetes/manifests/etcd.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://172.31.4.108:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.31.4.108:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://172.31.4.108:2380
    - --initial-cluster=master=https://172.31.4.108:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.31.4.108:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://172.31.4.108:2380
    - --name=master
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.13-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: etcd
    resources:
      requests:
        cpu: 100m
        ephemeral-storage: 100Mi
        memory: 100Mi
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

ls -l /var/lib
total 180
drwxr-xr-x  4 root root 4096 Oct 21  2021 AccountsService
drwxr-xr-x  2 root root 4096 Nov  6  2021 PackageKit
drwxr-x---  3 root root 4096 Nov  6  2021 amazon
drwxr-xr-x  3 root root 4096 Nov  7  2021 apport
drwxr-xr-x  5 root root 4096 Jun 26 07:01 apt
drwxr-xr-x  2 root root 4096 Sep 10  2020 boltd
drwxr-xr-x  8 root root 4096 Jul  2 17:59 cloud
drwx------  3 root root 4096 Nov 20  2021 cni
drwxr-xr-x  2 root root 4096 Jul  2 19:29 command-not-found
drwx--x--x 12 root root 4096 Mar 28 14:30 containerd
drwxr-xr-x  2 root root 4096 Nov  6  2021 dbus
drwxr-xr-x  2 root root 4096 Apr 10  2020 dhcp
drwxr-xr-x  3 root root 4096 Nov 27  2021 dockershim
drwxr-xr-x  7 root root 4096 Jun 26 07:01 dpkg
drwx------  3 root root 4096 Jul  2 18:00 etcd
drwxr-xr-x  7 root root 4096 Mar 29 04:21 fwupd
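As a side note, a minimal sketch of a post-restore sanity check against the same kubeadm certificate paths shown above (assumes etcd listens on 127.0.0.1:2379 as in the manifest):

sudo ETCDCTL_API=3 etcdctl endpoint status --write-out=table \
  --endpoints=https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt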

Debian 11: Unable to mount NFS directory at startup running in VMWare Player

Posted: 02 Jul 2022 01:52 PM PDT

I'm working to replace an older Ubuntu 20 VM running on my desktop (VMware Workstation 16 Player) with a new VM running Debian 11. The network adapter is set to "Bridged".

Everything has worked well except one thing: I store home directories on a Synology NAS, which the Linux/BSD hosts mount via NFSv4. This works fine if I mount from the command line:

apt install nfs-common
mount -t nfs 192.168.1.100:/volume1/homes /mnt/homes

So I've added this to /etc/fstab and rebooted:

192.168.1.100:/volume1/homes /mnt/homes nfs rw 0 0  

However, the directory does not mount at startup. Based on /var/log/daemon.log, it seems to be trying to mount before the network interface is fully operational:

Jul  2 12:42:49 debian systemd[1]: Started RPC bind portmap service.
Jul  2 12:42:49 debian systemd[1]: Reached target Remote File Systems (Pre).
Jul  2 12:42:49 debian systemd[1]: Reached target RPC Port Mapper.
Jul  2 12:42:49 debian systemd[1]: Mounting /mnt/homes...
Jul  2 12:42:49 debian systemd[1]: Started Network Time Synchronization.
Jul  2 12:42:49 debian systemd[1]: Reached target System Time Set.
Jul  2 12:42:49 debian systemd[1]: Reached target System Time Synchronized.
Jul  2 12:42:49 debian mount[392]: mount.nfs: Network is unreachable
Jul  2 12:42:49 debian systemd[1]: mnt-homes.mount: Mount process exited, code=exited, status=32/n/a
Jul  2 12:42:49 debian systemd[1]: mnt-homes.mount: Failed with result 'exit-code'.
Jul  2 12:42:49 debian systemd[1]: Failed to mount /mnt/homes.
Jul  2 12:42:49 debian systemd[1]: Dependency failed for Remote File Systems.
Jul  2 12:42:49 debian systemd[1]: remote-fs.target: Job remote-fs.target/start failed with result 'dependency'.
Jul  2 12:42:49 debian systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jul  2 12:42:49 debian systemd[1]: Starting Load/Save RF Kill Switch Status...

I tried Ubuntu 22 with the same config and it mounted fine, but I'm trying to use Debian because it tends to be simpler and use less memory.
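For reference, a minimal sketch of an fstab entry that defers the mount until the network is up; the _netdev and x-systemd.automount options are the usual suggestions, to be verified against systemd.mount(5):

192.168.1.100:/volume1/homes /mnt/homes nfs rw,_netdev,x-systemd.automount 0 0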

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection

Posted: 02 Jul 2022 12:53 PM PDT

I am stuck configuring Publish over SSH in Jenkins. From the Jenkins container to the Ansible container I can SSH via private key on the CLI, but the same private key does not work with Publish over SSH in the Jenkins GUI.
Error: jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [ansible]. Message [Auth fail]]

Container Error: WARNING j.p.p.BapSshHostConfiguration#connect: Failed to connect session for config [ansible]. Message [Auth fail] com.jcraft.jsch.JSchException: Auth fail
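One frequently cited cause, offered here as an assumption to check: the JSch library behind Publish over SSH often cannot parse private keys in the newer OpenSSH format. Converting the key to PEM is a common workaround:

# rewrites the key file in place in PEM format (back it up first)
ssh-keygen -p -m PEM -f ~/.ssh/id_rsa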

How to shrink qcow2 disk safely?

Posted: 02 Jul 2022 12:59 PM PDT

This is what my guest looked like: (screenshot of the guest's disk layout)

I shrank the C: partition inside the Windows 10 guest and made sure the unallocated space was at the right-most end of the disk.

Then I ran:

sudo qemu-img resize --shrink ../kvm_storage/win10.qcow2 62G

and it made the KVM guest unbootable.

Fortunately I had an old snapshot and was able to restore it. Please let me know how I can safely reduce the size of the KVM disk from 100 GB to 62 GB.
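For reference, one commonly suggested safer route is to copy the guest into a fresh, smaller image rather than shrinking in place; virt-resize (part of libguestfs) shrinks the filesystem and the partition together. A sketch, with the partition name /dev/sda2 as an assumption to verify with virt-filesystems first:

qemu-img create -f qcow2 win10-small.qcow2 62G
virt-resize --shrink /dev/sda2 win10.qcow2 win10-small.qcow2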

docker nginx doesn't serve static files through upstream Next.JS

Posted: 02 Jul 2022 12:14 PM PDT

I couldn't find an answer to my question from other similar questions.

So, I have two docker containers:

  • Next.JS web-app
  • nginx reverse proxy

If I connect to the nginx container via docker exec -it..., I can see my static files bound from the shared volume.

I run them with docker-compose:

volumes:
  nextjs-build:

version: '3.9'

services:
  nginx:
    image: arm64v8/nginx:alpine
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      - blog
    restart: unless-stopped
    depends_on:
      - website-front
    volumes:
      - type: volume
        source: nextjs-build
        target: /nextjs
        read_only: true
      - type: bind
        source: /etc/ssl/private/blog-ssl
        target: /etc/ssl/private/
        read_only: true
      - type: bind
        source: ./nginx/includes
        target: /etc/nginx/includes
        read_only: true
      - type: bind
        source: ./nginx/conf.d
        target: /etc/nginx/conf.d
        read_only: true
      - type: bind
        source: ./nginx/dhparam.pem
        target: /etc/nginx/dhparam.pem
        read_only: true
      - type: bind
        source: ./nginx/nginx.conf
        target: /etc/nginx/nginx.conf
        read_only: true

  website-front:
    build: ./website
    container_name: website-front
    ports:
      - "3000"
    networks:
      - blog
    restart: unless-stopped
    volumes:
      - nextjs-build:/app/.next

networks:
  blog:
    external:
      name: nat

my nginx configs:

upstream nextjs_upstream {
    server website-front:3000;
}

server {
    listen       443 http2 ssl;
    listen       [::]:443 http2 ssl;

    server_name  website_url;

    ssl_certificate         /etc/ssl/private/chain.crt;
    ssl_certificate_key     /etc/ssl/private/server.key;
    ssl_trusted_certificate /etc/ssl/private/ca.ca-bundle;

    # access_log  /var/log/nginx/host.access.log  main;

    # security
    include     includes/security.conf;
    include     includes/general.conf;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location /_next {
        proxy_pass http://nextjs_upstream/.next;
    }

    location / {
        proxy_pass http://nextjs_upstream;
    }
}

I've tried multiple nginx configurations for the static route, for example:

location /_next {
    root /nextjs;
}

NextJS dockerfile:

FROM node:alpine AS deps
# this ensures we fix symlinks for npx, Yarn, and PnPm
RUN apk add --no-cache libc6-compat
RUN corepack disable && corepack enable
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# builder
FROM node:alpine AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN yarn build

# runner
FROM node:alpine AS runner
WORKDIR /app

ENV NODE_ENV production

COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/yarn.lock ./yarn.lock

RUN chown -R node:node /app/.next

USER node

EXPOSE 3000

CMD [ "yarn", "start" ]

With that config I can see my website, but static files return 404 through the upstream.
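For reference, a minimal sketch of serving the build output straight from the shared volume instead of proxying it: Next.js emits its static assets under .next/static, which the compose file above mounts at /nextjs, so an alias like this is one possibility (untested):

location /_next/static/ {
    alias /nextjs/static/;
    access_log off;
}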

Why can't k8s find my configmap to mount it as a file?

Posted: 02 Jul 2022 04:51 PM PDT

I have what I believe to be the simplest of k8s deployments using a configmap value as a file and it refuses to work, instead returning this error message:

The Pod "configmap-demo-pod" is invalid: spec.containers[0].volumeMounts[0].name: Not found: "the-thing"  

This is a watered-down version of what I found in the documentation so I am completely at a loss as to what I'm missing.

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  the-thing: |
    hello

---

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
      - mountPath: "/"
        name: the-thing
  volumes:
    - name: config
      configMap:
        name: demo
        items:
        - key: the-thing
          path: "filename"
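For reference, the error says the volumeMount's name does not match any declared volume: the volume is named config while the mount references the-thing (a ConfigMap key, not a volume). A minimal sketch of the likely fix; the mountPath here is an arbitrary example, since mounting over "/" would shadow the container's root filesystem:

      volumeMounts:
      - mountPath: "/etc/demo"   # any path other than "/"
        name: config             # must match the volume name under spec.volumes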

Why does locking a file shared over SMB take only 3 microseconds?

Posted: 02 Jul 2022 03:49 PM PDT

I used the following C++ functions to lock 1 byte of a file shared by a server via SMB:

On Windows:

LockFileEx (h, LOCKFILE_FAIL_IMMEDIATELY | LOCKFILE_EXCLUSIVE_LOCK, 0, 1,0, &overlapped)  

On Linux and Mac OS:

fcntl (fd, F_SETLK, &flockdesc)  

On both locally shared files and a file shared over SMB, I got a successful lock in just 3 microseconds, whereas for a file shared over NFS, acquiring the lock took 26 milliseconds.

I don't understand how a process can acquire a lock in 3 microseconds over a network, since a round trip to the server should take at least a few milliseconds, as observed with the file shared over NFS.

Could anyone help me figure out why this is happening?

Hide Task Scheduler Error message

Posted: 02 Jul 2022 12:42 PM PDT

I have a scheduled task that runs a bat file silently on the local computer. When the file is in place, everything works fine, but if it is missing, Task Scheduler throws up an error that bothers the end user. Is there a way to make the task display nothing if it can't find the file?
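For reference, a minimal sketch of one way to do this: have the task run cmd.exe with an existence check, so a missing file exits silently (the path is a placeholder):

rem Action: Program = cmd.exe, Arguments as below
/c if exist "C:\scripts\job.bat" call "C:\scripts\job.bat"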

What is the API path for kubectl top pods?

Posted: 02 Jul 2022 03:48 PM PDT

I run the command kubectl top pods.

I need the API path behind this command; that is, I want to retrieve the data that kubectl top pods returns via the API.
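For reference, kubectl top pods reads the Metrics API served by metrics-server; a minimal sketch of querying the same path through the API server (the namespace is an example):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"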

Ansible: Why can't I get debug output?

Posted: 02 Jul 2022 03:48 PM PDT

I have the following playbook, and I'd like to get the debug output:

---
- name: Install and configure chrony
  hosts: '{{ hostgroup }}'
  gather_facts: no
  become: yes
  become_method: sudo

  tasks:
    - name: make sure chronyd is installed
      yum:
        name: chrony
        state: latest
        update_cache: yes
      register: command_output

    - name: Print result of install chronyd
      debug:
        var: command_output.stdout_lines

But this is what I get:

[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting

/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1) or chardet (2.2.1) doesn't match a supported version!
  RequestsDependencyWarning)

PLAY [Install and configure chrony] ********************************************

TASK [make sure chronyd is installed] ******************************************
ok: [serv8]

TASK [Print result of install chronyd] *****************************************
ok: [serv8] => {
    "command_output.stdout_lines": "VARIABLE IS NOT DEFINED!"
}

PLAY RECAP *********************************************************************
serv8               : ok=2    changed=0    unreachable=0    failed=0

How can I fix the playbook for getting the debug output?
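For reference, a minimal sketch of a more robust debug task: the yum module does not always return stdout_lines (for example, when nothing changes), but printing the whole registered variable always works:

    - name: Print result of install chronyd
      debug:
        var: command_output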

Blocking phpmyadmin from internet, allow only from lan in nginx

Posted: 02 Jul 2022 05:02 PM PDT

I'm running 2 websites on a LEMP stack with nginx configured as a reverse proxy server. I have successfully installed phpmyadmin in the root directory of one of my sites. When I go to www.example.com/phpmyadmin, I can reach the phpmyadmin login page from the public internet as well as from my LAN. What I would like to do is configure nginx to block any traffic to phpmyadmin that doesn't originate from my local area network. I also have an /admin folder in the root of my site, and I HAVE SUCCESSFULLY blocked all traffic to that folder that doesn't originate from my LAN. I figured blocking phpmyadmin from the outside world would be as easy as reusing the same nginx virtual host configuration lines I used to block the /admin/ directory, just changing the location to /phpmyadmin. However, when I do this, phpmyadmin is blocked on the local network as well.

Below are the relevant parts of my nginx virtual host configuration for example.com. You can see which blocking configurations work and which don't, as noted in the comments. Help me fix the "NOT WORKING" lines. Note: my server's local IP address is 192.168.1.20.

server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        ssl on;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        server_name example.com www.example.com;
        root /var/www/example.com;

        index index.php;

        location / {
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?$args;
        }

        # Disallow PHP In Upload Folder
        location /wp-content/uploads/ {
                location ~ \.php$ {
                        deny all;
                }
        }

        # LAN ONLY ACCESS WORKING
        location ^~ /admin {
                allow 192.168.1.0/24;
                deny all;
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
        }

        # LAN ONLY ACCESS NOT WORKING!!!
        location ^~ /phpmyadmin {
                allow 192.168.1.0/24;
                deny all;
                include fastcgi.conf;
                fastcgi_intercept_errors on;
                fastcgi_pass local_php;
                fastcgi_buffers 16 16k;
                fastcgi_buffer_size 32k;
        }

        # LAN ONLY ACCESS WORKING
        location ~ \.php$ {
                include fastcgi.conf;
                fastcgi_intercept_errors on;
                fastcgi_pass local_php;
                fastcgi_buffers 16 16k;
                fastcgi_buffer_size 32k;
        }
}

What edits to my virtual host config file must I make to properly restrict phpmyadmin to my LAN in Nginx?
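For reference, a minimal sketch of one commonly suggested variant: nesting the PHP handler inside the restricted prefix so that .php requests under /phpmyadmin cannot fall through to the unrestricted PHP location (untested against this exact setup):

location ^~ /phpmyadmin {
        allow 192.168.1.0/24;
        deny all;
        location ~ \.php$ {
                include fastcgi.conf;
                fastcgi_pass local_php;
        }
}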

convert nginx reverse proxy config to apache

Posted: 02 Jul 2022 01:02 PM PDT

I have the following working nginx reverse proxy config

server {
  listen 192.168.100.7:443;
  server_name mysite.internal;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
    proxy_pass http://192.168.100.8;
    proxy_http_version 1.1;
  }

  ssl on;
  ssl_certificate /etc/nginx/certs/mycert.cer;
  ssl_certificate_key /etc/nginx/certs/mycert.key;
  ssl_session_timeout  5m;
}

and I'm trying to convert it to run under Apache httpd, but I'm not able to get it working.
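For reference, a rough sketch of an equivalent Apache httpd vhost, assuming mod_ssl, mod_proxy, mod_proxy_http, and mod_headers are enabled (untested; mod_proxy adds X-Forwarded-For on its own):

<VirtualHost 192.168.100.7:443>
    ServerName mysite.internal

    SSLEngine on
    SSLCertificateFile /etc/nginx/certs/mycert.cer
    SSLCertificateKeyFile /etc/nginx/certs/mycert.key

    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass        / http://192.168.100.8/
    ProxyPassReverse / http://192.168.100.8/
</VirtualHost>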

SSH: Behavior of SetEnv for TERM variable

Posted: 02 Jul 2022 03:17 PM PDT

I want to set the TERM environment variable to a different value for each of my remote machines, so I used SetEnv TERM=myTermForRemoteVar in ~/.ssh/config. The remote machine still sees TERM=myLocalTermVar.

I added AcceptEnv TERM in /etc/ssh/sshd_config on the remote machine. Still no luck.

I tried just for testing purposes SetEnv FOO=smth locally and AcceptEnv FOO. This works perfectly and the remote machine sees FOO=smth.

Is TERM treated specially by ssh? SetEnv works in general, just not for TERM. Is anyone else seeing this behavior? It isn't documented, at least. Is this a bug?

When should one enable "Do not require Kerberos preauthentication"?

Posted: 02 Jul 2022 04:01 PM PDT

In Active Directory, there's an option "Do not require Kerberos preauthentication".


Does anyone know the use case for checking it? I'm wondering when people enable it.

Domain Join - The Network Path was not Found

Posted: 02 Jul 2022 01:02 PM PDT

I am unable to join PCs or servers to my domain and cannot determine the cause.

What have I tried? Disabled the domain Windows Firewall; changed the network adapter from Public to Domain; pointed DNS directly at the domain controller (same issue). I have Malwarebytes Premium installed, but this has not caused issues in the past. The client is Windows Server 2016 (unactivated).

Any advice would be greatly appreciated.

Here are the results from the NetSetup.log on the client...

03/31/2019 22:16:23:240 NetpDoDomainJoin
03/31/2019 22:16:23:240 NetpDoDomainJoin: using current computer names
03/31/2019 22:16:23:240 NetpDoDomainJoin: NetpGetComputerNameEx(NetBios) returned 0x0
03/31/2019 22:16:23:240 NetpDoDomainJoin: NetpGetComputerNameEx(DnsHostName) returned 0x0
03/31/2019 22:16:23:240 NetpMachineValidToJoin: 'GVSNORTSVR'
03/31/2019 22:16:23:240 NetpMachineValidToJoin: status: 0x0
03/31/2019 22:16:23:240 NetpJoinDomain
03/31/2019 22:16:23:240     HostName: GVSNORTSVR
03/31/2019 22:16:23:240     NetbiosName: GVSNORTSVR
03/31/2019 22:16:23:240     Domain: XXX.co.uk
03/31/2019 22:16:23:240     MachineAccountOU: (NULL)
03/31/2019 22:16:23:240     Account: XXX.co.uk\ADM-LC
03/31/2019 22:16:23:240     Options: 0x27
03/31/2019 22:16:23:240 NetpValidateName: checking to see if 'XXX.co.uk' is valid as type 3 name
03/31/2019 22:16:23:240 NetpValidateName: 'XXX.co.uk' is not a valid NetBIOS domain name: 0x7b
03/31/2019 22:16:23:318 NetpCheckDomainNameIsValid [ Exists ] for 'XXX.co.uk' returned 0x0
03/31/2019 22:16:23:318 NetpValidateName: name 'XXX.co.uk' is valid for type 3
03/31/2019 22:16:23:318 NetpDsGetDcName: trying to find DC in domain 'XXX.co.uk', flags: 0x40001010
03/31/2019 22:16:23:774 NetpDsGetDcName: failed to find a DC having account 'GVSNORTSVR$': 0x525, last error is 0x0
03/31/2019 22:16:23:774 NetpDsGetDcName: status of verifying DNS A record name resolution for 'XXX-DC.XXX.co.uk': 0x0
03/31/2019 22:16:23:774 NetpDsGetDcName: found DC '\\XXX-DC.XXX.co.uk' in the specified domain
03/31/2019 22:16:23:774 NetpJoinDomainOnDs: NetpDsGetDcName returned: 0x0
03/31/2019 22:16:23:774 NetpDisableIDNEncoding: using FQDN XXX.co.uk from dcinfo
03/31/2019 22:16:23:774 NetpDisableIDNEncoding: DnsDisableIdnEncoding(UNTILREBOOT) on 'XXX.co.uk' succeeded
03/31/2019 22:16:23:774 NetpJoinDomainOnDs: NetpDisableIDNEncoding returned: 0x0
03/31/2019 22:16:23:774 NetUseAdd to \\XXX-DC.XXX.co.uk\IPC$ returned 53
03/31/2019 22:16:23:774 NetpJoinDomainOnDs: status of connecting to dc '\\XXX-DC.XXX.co.uk': 0x35
03/31/2019 22:16:23:774 NetpJoinDomainOnDs: Function exits with status of: 0x35
03/31/2019 22:16:23:774 NetpResetIDNEncoding: DnsDisableIdnEncoding(RESETALL) on 'XXX.co.uk' returned 0x0
03/31/2019 22:16:23:774 NetpJoinDomainOnDs: NetpResetIDNEncoding on 'XXX.co.uk': 0x0
03/31/2019 22:16:23:774 NetpDoDomainJoin: status: 0x35

Here are the results from DCDiag...

Directory Server Diagnosis

Performing initial setup:
   Trying to find home server...
   Home Server = XXX-DC
   * Identified AD Forest.
   Done gathering initial info.

Doing initial required tests

   Testing server: Default-First-Site-Name\XXX-DC
      Starting test: Connectivity
         ......................... XXX-DC passed test Connectivity

Doing primary tests

   Testing server: Default-First-Site-Name\XXX-DC
      Starting test: Advertising
         ......................... XXX-DC passed test Advertising
      Starting test: FrsEvent
         ......................... XXX-DC passed test FrsEvent
      Starting test: DFSREvent
         There are warning or error events within the last 24 hours after the
         SYSVOL has been shared. Failing SYSVOL replication problems may cause
         Group Policy problems.
         ......................... XXX-DC failed test DFSREvent
      Starting test: SysVolCheck
         ......................... XXX-DC passed test SysVolCheck
      Starting test: KccEvent
         ......................... XXX-DC passed test KccEvent
      Starting test: KnowsOfRoleHolders
         ......................... XXX-DC passed test KnowsOfRoleHolders
      Starting test: MachineAccount
         ......................... XXX-DC passed test MachineAccount
      Starting test: NCSecDesc
         ......................... XXX-DC passed test NCSecDesc
      Starting test: NetLogons
         ......................... XXX-DC passed test NetLogons
      Starting test: ObjectsReplicated
         ......................... XXX-DC passed test ObjectsReplicated
      Starting test: Replications
         ......................... XXX-DC passed test Replications
      Starting test: RidManager
         ......................... XXX-DC passed test RidManager
      Starting test: Services
         ......................... XXX-DC passed test Services
      Starting test: SystemLog
         ......................... XXX-DC passed test SystemLog
      Starting test: VerifyReferences
         ......................... XXX-DC passed test VerifyReferences

   Running partition tests on : ForestDnsZones
      Starting test: CheckSDRefDom
         ......................... ForestDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... ForestDnsZones passed test CrossRefValidation

   Running partition tests on : DomainDnsZones
      Starting test: CheckSDRefDom
         ......................... DomainDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... DomainDnsZones passed test CrossRefValidation

   Running partition tests on : Schema
      Starting test: CheckSDRefDom
         ......................... Schema passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... Schema passed test CrossRefValidation

   Running partition tests on : Configuration
      Starting test: CheckSDRefDom
         ......................... Configuration passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... Configuration passed test CrossRefValidation

   Running partition tests on : XXX
      Starting test: CheckSDRefDom
         ......................... XXX passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... XXX passed test CrossRefValidation

   Running enterprise tests on : XXX.co.uk
      Starting test: LocatorCheck
         ......................... XXX.co.uk passed test LocatorCheck
      Starting test: Intersite
         ......................... XXX.co.uk passed test Intersite
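For reference, "NetUseAdd ... returned 53" / status 0x35 is ERROR_BAD_NETPATH: the client could not open the DC's IPC$ share over SMB. A minimal sketch of checking SMB reachability from the client (PowerShell):

Test-NetConnection XXX-DC.XXX.co.uk -Port 445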

Windows 10 network mapping using server name in hosts file

Posted: 02 Jul 2022 03:05 PM PDT

I want to map a samba shared folder on my Windows 10 Home PC. The server is a Linux - CentOS 7 with Samba 4.4.4.
If I use the server's IP address, it works fine; however, if I create an entry in the hosts file to name my server, I get a path-not-found error.

First, a simple net view using the IP works:

net view \\192.168.0.10  

I added the following to my hosts file:

192.168.0.10 myserver  

But I got the following result:

net view \\myserver

System error 53 has occurred.
The network path was not found.

Pinging the server works fine using myserver

UPDATE

Using the IP I can access the server and the Get-SMBConnection result is:

PS C:\WINDOWS\system32> Get-SMBConnection

ServerName   ShareName UserName              Credential              Dialect NumOpens
----------   --------- --------              ----------              ------- --------
192.168.0.20 IPC$      DEVELOPER-PC-01\vilma DEVELOPER-PC-01\unixmen 3.1.1   1

Using the server name, I cannot even browse the server.

iptables rules for NAT with FTP

Posted: 02 Jul 2022 04:01 PM PDT

I'm trying to set up NAT to accomplish two things at once:

  1. Users from public network are able to access the FTP server
  2. Users in the LAN are able to use same WAN address 203.X.X.X to access the FTP server
network topology

                                [---] win10 PC
    \       /                   [ - ] 10.0.0.4
[wireless router]-------------- [ _ ]
 WAN:203.x.x.x
 LAN gateway:10.0.0.138          _______
                                /      /   laptop **linux FTP server**
                               /______/    iptables **NAT running here**
                               \       \   wlan0:10.0.0.113
                                \_______\  port:20,21
                                           passive:6000:7000

Right now the FTP server is only accessible through the LAN at ftp://10.0.0.113. I want to forward a port to the local FTP server so that any user can use the WAN address 203.x.x.x to log in to the FTP server. I'm testing with a Windows 10 machine that is on the same LAN.

*nat
:PREROUTING ACCEPT [280:86644]
:INPUT ACCEPT [79:4030]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -j LOG
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 10.0.0.113:21
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 20 -j DNAT --to-destination 10.0.0.113
-A PREROUTING -d 203.213.238.12/32 -p tcp -m tcp --dport 6000:7000 -j DNAT --to-destination 10.0.0.113
-A OUTPUT -j LOG
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 10.0.0.113:21
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 20 -j DNAT --to-destination 10.0.0.113
-A OUTPUT -d 203.213.238.12/32 -p tcp -m tcp --dport 6000:7000 -j DNAT --to-destination 10.0.0.113
-A POSTROUTING -j LOG
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 21 -j SNAT --to-source 10.0.0.138:21
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 20 -j SNAT --to-source 10.0.0.138
-A POSTROUTING -d 10.0.0.113/32 -o wlan0 -p tcp -m tcp --dport 6000:7000 -j SNAT --to-source 10.0.0.138
COMMIT
# Completed on Thu Mar  2 19:40:51 2017
# Generated by iptables-save v1.4.21 on Thu Mar  2 19:40:51 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [412:52590]
-A INPUT -i wlan0 -j ACCEPT
-A FORWARD -o wlan0 -j ACCEPT
-A FORWARD -i wlan0 -j ACCEPT
COMMIT

I'm not sure what I've missed or whether there are logical mistakes in the configuration. Any help would be appreciated.
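One thing worth checking, offered as an assumption rather than a diagnosis: FTP through NAT normally also needs the kernel connection-tracking helpers loaded, so that the data connections (active port 20 and the passive 6000-7000 range) get tracked and translated alongside the control connection:

modprobe nf_conntrack_ftp
modprobe nf_nat_ftp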

ZFS cannot import : I/O error

Posted: 02 Jul 2022 12:18 PM PDT

I experienced an error on my ZFS machine. This may have happened after I lost power to the server, but I can't tell for sure. My ZFS pool Gewuerzglas, a RAIDZ-1, is no longer willing to import. I always get this error when trying to import:

cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

I have already tried several things; none seemed to work. Any suggestions on how to rescue this pool? As all the drives are still online, it seems to me that the data might still be there but some checksums have gone bad.

What I have tried so far

root@openmediavault:~# zpool import
   pool: Gewuerzglas
     id: 15011586312885837941
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Gewuerzglas   ONLINE
          raidz1-0    ONLINE
            sda       ONLINE
            sde       ONLINE
            sdj       ONLINE
            sdc       ONLINE
            sdg       ONLINE
            sdf       ONLINE
            sdi       ONLINE
            sdh       ONLINE
            sdd       ONLINE

root@openmediavault:~# zpool import Gewuerzglas
cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

root@openmediavault:~# zpool import -f Gewuerzglas
cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

root@openmediavault:~# zpool import -f -F Gewuerzglas
cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

root@openmediavault:~# zpool import -f -F -X Gewuerzglas
cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

root@openmediavault:~# zpool import -m -f -o readonly=on Gewuerzglas
cannot import 'Gewuerzglas': I/O error
        Destroy and re-create the pool from
        a backup source.

root@openmediavault:~# zpool import -f -F -n Gewuerzglas  

This does not print anything.

I was also able to take this screenshot when I connected a monitor to my server.

Does anyone know how to repair the pool or rescue the data?

System information

Distribution Name    | Openmediavault
Distribution Version | 3.0.59
Linux Kernel         | Linux 3.16.0-4-amd64
Architecture         | Debian
ZFS Version          | 0.6.5.8-2~bpo8+1
SPL Version          | 0.6.5.8-2~bpo8+2

fdisk -l output:

root@openmediavault:~# fdisk -l

Disk /dev/sdb: 55,9 GiB, 60021399040 bytes, 117229295 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3fb31afd

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sdb1  *           63 112084897 112084835  53,5G 83 Linux
/dev/sdb2       112084898 117212193   5127296   2,5G  5 Extended
/dev/sdb5       112084900 113466426   1381527 674,6M 82 Linux swap / Solaris

Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B9434DE2-9E3A-504E-B242-B789190FA040

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sda9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DC494CB2-6208-6B4E-ACB9-190924BCBC70

Device          Start        End    Sectors  Size Type
/dev/sdc1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdc9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A7AAF08-3D15-5747-8188-8121A00A70C9

Device          Start        End    Sectors  Size Type
/dev/sdd1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdd9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sde: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DDF59778-5338-C543-B96C-D5D151C953F4

Device          Start        End    Sectors  Size Type
/dev/sde1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sde9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdg: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F4B63B28-F391-B14E-B9B9-A6CE0F4467E9

Device          Start        End    Sectors  Size Type
/dev/sdg1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdg9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdi: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E367E0F7-C817-354D-AC4A-B730F9FAAB19

Device          Start        End    Sectors  Size Type
/dev/sdi1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdi9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdj: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 93A033FA-229E-234B-8AF1-C9869F3D05FF

Device          Start        End    Sectors  Size Type
/dev/sdj1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdj9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdh: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 18213F37-6CEC-BF48-AC84-E6434144BAFE

Device          Start        End    Sectors  Size Type
/dev/sdh1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdh9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdf: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 03F63180-AF94-E442-8758-66F78148FA82

Device          Start        End    Sectors  Size Type
/dev/sdf1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdf9  3907012608 3907028991      16384    8M Solaris reserved 1

lsscsi output

root@openmediavault:~# lsscsi
[0:0:0:0]    disk    ATA      HUA722020ALA330  A34H  /dev/sda
[2:0:0:0]    disk    ATA      OCZ-VERTEX2      1.11  /dev/sdb
[6:0:0:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sdc
[6:0:1:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sdd
[6:0:2:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sde
[6:0:3:0]    disk    ATA      SAMSUNG HD204UI  0001  /dev/sdf
[6:0:4:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sdg
[6:0:5:0]    disk    ATA      ST32000542AS     CC34  /dev/sdh
[6:0:6:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sdi
[6:0:7:0]    disk    ATA      Hitachi HUA72302 A840  /dev/sdj

lsscsi -c output

root@openmediavault:~# lsscsi -c
Attached devices:
Host: scsi0 Channel: 00 Target: 00 Lun: 00
  Vendor: ATA      Model: HUA722020ALA330  Rev: A34H
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Target: 00 Lun: 00
  Vendor: ATA      Model: OCZ-VERTEX2      Rev: 1.11
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 00 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 01 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 02 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 03 Lun: 00
  Vendor: ATA      Model: SAMSUNG HD204UI  Rev: 0001
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 04 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 05 Lun: 00
  Vendor: ATA      Model: ST32000542AS     Rev: CC34
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 06 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Target: 07 Lun: 00
  Vendor: ATA      Model: Hitachi HUA72302 Rev: A840
  Type:   Direct-Access                    ANSI SCSI revision: 05

I somehow managed to import the pool, but unfortunately it is still not working. I get varying status output from zpool.

root@openmediavault:~# zpool status
  pool: Gewuerzglas
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from
        a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        Gewuerzglas              UNAVAIL      0     0     0  insufficient replicas
          raidz1-0               UNAVAIL      0     0     0  insufficient replicas
            2196863600760350153  FAULTED      0     0     0  was /dev/sda1
            sde                  ONLINE       0     0     0
            sdj                  ONLINE       0     0     0
            sdc                  ONLINE       0     0     0
            sdg                  ONLINE       0     0     0
            sdf                  ONLINE       0     0     0
            sdi                  ONLINE       0     0     0
            sdh                  ONLINE       0     0     0
            8554831367157426907  FAULTED      0     0     0  was /dev/sdd1

The other output I get is:

root@openmediavault:~# zpool status
  pool: Gewuerzglas
 state: UNAVAIL
status: One or more devices are faulted in response to persistent errors.
        There are insufficient replicas for the pool to continue functioning.
action: Destroy and re-create the pool from a backup source.  Manually marking
        the device repaired using 'zpool clear' may allow some data to be
        recovered.
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        Gewuerzglas              UNAVAIL      0     0     0  insufficient replicas
          raidz1-0               UNAVAIL      0     0     0  insufficient replicas
            2196863600760350153  FAULTED      0     0     0  was /dev/sda1
            sde                  ONLINE       0     0     0
            sdj                  ONLINE       0     0     0
            sdc                  ONLINE       0     0     0
            sdg                  ONLINE       0     0     0
            sdf                  ONLINE       0     0     0
            sdi                  ONLINE       0     0     0
            sdh                  ONLINE       0     0     0
            8554831367157426907  FAULTED      0     0     0  was /dev/sdd1

root@openmediavault:~# zpool clear Gewuerzglas
cannot clear errors for Gewuerzglas: one or more devices is currently unavailable

root@openmediavault:~# blkid
/dev/sdb1: UUID="883ce00c-edc5-479f-9fac-8941b0413ef8" TYPE="ext4" PARTUUID="3fb31afd-01"
/dev/sdb5: UUID="f670cf30-a948-47bc-98bf-17f0626bd9de" TYPE="swap" PARTUUID="3fb31afd-05"
/dev/sdc1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="273389127680630550" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="5503a0c7-981f-e74d-b5c9-4084eb1df641"
/dev/sdd1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="2196863600760350153" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="97861472-b1ed-954a-831c-119b0d08f45f"
/dev/sde1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="737221115472686340" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="cf5d27f2-22f6-1842-b728-50707ed22be4"
/dev/sdg1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="15202153139351954887" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="20815fac-4447-a04f-8b13-25d0d839ca90"
/dev/sdi1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="10997858781946851491" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="e123a49f-67de-d941-a02e-284ee4dc2670"
/dev/sdh1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="7665980383293405333" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="0cb1d5df-2c3d-db46-a8c5-da8cdeff94b0"
/dev/sdj1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="12174557459640099897" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="a949ef49-83d0-884d-828e-8b7cee1c8545"
/dev/sdf1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="5962312056749847786" TYPE="zfs_member" PARTLABEL="zfs" PARTUUID="840ace6e-d262-8e48-a3a5-a50ae7e3472d"
/dev/sdk1: LABEL="Gewuerzglas" UUID="15011586312885837941" UUID_SUB="8554831367157426907" TYPE="zfs_member" PARTLABEL="zfs-a70471c39f62b609" PARTUUID="2ccfee22-e90e-8949-8971-f581e13ff243"
/dev/sdc9: PARTUUID="b7e6e29d-2862-bf4a-8346-ef5566fca891"
/dev/sdd9: PARTUUID="c1ebbcb0-a458-0e47-8f9f-2f9b8bc67165"
/dev/sde9: PARTUUID="2e134ff0-1290-8c40-b50d-10b4fb022c64"
/dev/sdg9: PARTUUID="6cd1a402-0714-5a4a-acf2-9097314401f3"
/dev/sdi9: PARTUUID="739d94f8-8eb2-444c-99ff-65b6fb7e48a7"
/dev/sdh9: PARTUUID="d2f5af40-ba13-7947-a46a-23aa1b00ea8e"
/dev/sdj9: PARTUUID="840de1b5-f24d-9446-9471-bca8d4aff777"
/dev/sdf9: PARTUUID="72b593e7-e390-8540-bba0-15469cd74d96"
/dev/sdk9: PARTUUID="a8ba619d-c59e-be41-a1bc-82c81ffc01b5"

I don't understand why it is telling me that these drives are currently unavailable, because they don't appear to be. I can see them, and I just ran long SMART checks on all of the drives.
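For reference, a minimal sketch of one thing often tried when device letters have shifted (note that the blkid output shows the label that "was /dev/sdd1" now living on /dev/sdk1): pointing the import at stable device paths so ZFS can re-find all labels. This is an assumption to verify, not a guaranteed fix:

zpool import -d /dev/disk/by-id Gewuerzglas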

rsync: connection unexpectedly closed - No tty present

Posted: 02 Jul 2022 02:06 PM PDT

I'm having issues trying to rsync files over from a build server to a web server.

The command I'm running is:

rsync -e "ssh -i ${HOME}/.ssh/id_rsa" --rsync-path="sudo rsync" \
  -avh --chown=nobody:webdev --chmod=Dg+s,ug+w --delete \
  --exclude-from=deployment_rsync-excludes.txt \
  ./ deploy-user@PROD01:/${my.application.web.root}/${bamboo.deploy.release}/

The CI agent throws this error:

sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]

If I tail -f /var/log/auth.log on the target server, I get this:

May 26 10:09:45 {some_webserver} sshd[30809]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May 26 10:09:45 {some_webserver} sshd[30809]: Accepted publickey for {deploy-user} from {some_ip} port 36883 ssh2: RSA {some_hash}
May 26 10:09:45 {some_webserver} sshd[30809]: pam_unix(sshd:session): session opened for user {deploy-user} by (uid=0)
May 26 10:09:46 {some_webserver} sshd[30896]: Received disconnect from {some_ip}: 11:
May 26 10:09:46 {some_webserver} sshd[30809]: pam_unix(sshd:session): session closed for user svcacct-deploy
May 26 10:09:46 {some_webserver} sshd[30898]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May 26 10:09:46 {some_webserver} sshd[30898]: Accepted publickey for svcacct-deploy from {some_ip} port 36888 ssh2: RSA {some_hash}
May 26 10:09:46 {some_webserver} sshd[30898]: pam_unix(sshd:session): session opened for user {deploy-user} by (uid=0)
May 26 10:09:47 {some_webserver} sudo: pam_unix(sudo:auth): conversation failed
May 26 10:09:47 {some_webserver} sudo: pam_unix(sudo:auth): auth could not identify password for [{deploy-user}]
May 26 10:09:47 {some_webserver} sudo: {deploy-user} : user NOT authorized on host ; TTY=unknown ; PWD=/home/svcacct-deploy ; USER=root ; COMMAND=/usr/bin/rsync --server -vlogDtpre.iLs --delete --usermap=*:nobody --groupmap=*:webdev . ${my.application.web.root}/${bamboo.deploy.release}/
May 26 10:09:47 {some_webserver} sshd[30968]: Received disconnect from 146.215.253.134: 11: disconnected by user
May 26 10:09:47 {some_webserver} sshd[30898]: pam_unix(sshd:session): session closed for user {deploy-user}

The server is running Ubuntu 14.04.

Any help resolving this matter would be greatly appreciated
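For reference, "no tty present" means sudo on the target is prompting for a password it has no way to ask for. A minimal sketch of the usual fix, a passwordless sudo rule for rsync on the web server (the user and path are taken from the logs above; edit via visudo):

deploy-user ALL=(root) NOPASSWD: /usr/bin/rsync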

Windows Server Backup is doing incremental instead of full backup of Exchange data

Posted: 02 Jul 2022 02:06 PM PDT

I am backing up an Exchange Server database to a backup volume on Windows Server 2012 R2, using Windows Server Backup.

I mostly followed the tutorial shown at http://exchangeserverpro.com/backup-exchange-server-2013-databases-using-windows-server-backup/

I want to back up the data and also remove old Exchange log files.

The backup is successful, but the log files are not being removed/truncated.

Exchange does not record a full backup in the database settings page. The "Details" panel for the last backup records the last backup as VSS Full backup, successful, but in the "items" list, both C and D are described as "Backup Type": "Incremental".

I cannot find any further settings to control if backup is "Full" or "Incremental" except on the VSS settings, which is set to Full.
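For reference, a minimal sketch of forcing a VSS full backup from the command line, to see whether the logs then truncate (the drive letters are placeholders):

wbadmin start backup -backupTarget:E: -include:C:,D: -vssFull -quiet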

Any suggestions?

Wireshark "length" column - what does it include?

Posted: 02 Jul 2022 03:47 PM PDT

Can anyone tell me what the "Length" column in WireShark refers to?

I'm pretty sure it's the "size" of the entire frame on the wire, but when I did the math myself, I didn't get the number that WireShark reports.

Does anyone know what the "length" includes? I read somewhere that the preamble (7 octets), start of frame delimiter (1 octet), and FCS (4 octets) aren't normally captured, but does this mean that WireShark still adds these numbers to get to the "length" calculation?

Rsync to windows FTP over curlftpfs fails to set permissions

Posted: 02 Jul 2022 03:05 PM PDT

I'm transferring some large files (1.5 GB) to a Windows FTP server in a cron job. The transfer goes over a 2 Mbit ADSL line with 512 Kbit upstream, so it takes forever and the line is prone to dropping.

I use a loop around rsync to do this:

while ! rsync -q -P --append source ftp_dest; do
    sleep 60  # give the line a chance to reset (which happens automatically)
done

This works to transfer the file, even with drops (I check the md5sums afterwards just in case). The problem comes at the very end of the process, where rsync tries to set the destination file's permissions, which fails because the underlying file system is NTFS.

EDIT:

I have added --no-p --no-o --no-g --no-X --no-A as suggested below, but I still get the following error:

rsync: ftruncate failed on "/mnt/ftpremote/test.bin": Operation not permitted (1)  

This is not a problem, except that it causes my loop to continue forever. Is there a way to tell rsync not to attempt any permission setting at all? I've looked through the man page and found nothing (even if you tell it not to set permissions, it sets default permissions). I thought of using error codes, but the error code for the failed permission setting is 23 ("Partial transfer due to error" according to the man page), which looks like the code that would also be used if the line dropped.
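For reference, a minimal sketch of making the loop tolerate the attribute failure by treating rsync's exit code 23 as success, under the assumption (worth testing) that a dropped line surfaces as a different code, such as 12 or 30:

rsync -q -P --append source ftp_dest
rc=$?
while [ $rc -ne 0 ] && [ $rc -ne 23 ]; do
    sleep 60
    rsync -q -P --append source ftp_dest
    rc=$?
done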

dpkg: warning: files list file for package 'x' missing

Posted: 02 Jul 2022 03:49 PM PDT

I get this warning for several packages every time I install a package or run apt-get upgrade. I'm not sure what is causing it; it's a fresh Debian install on my OpenVZ server and I haven't changed any dpkg settings.

Here's an example:

root@debian:~# apt-get install cowsay
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  filters
The following NEW packages will be installed:
  cowsay
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 21.9 kB of archives.
After this operation, 91.1 kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org/debian/ unstable/main cowsay all 3.03+dfsg1-4 [21.9 kB]
Fetched 21.9 kB in 0s (70.2 kB/s)
Selecting previously unselected package cowsay.
dpkg: warning: files list file for package 'libssh2-1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libkrb5-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libwrap0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcap2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpam-ck-connector:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libc6:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libtalloc2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libselinux1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libp11-kit0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libavahi-client3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbz2-1.0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpcre3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgpm2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgnutls26:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libavahi-common3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcroco3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblzma5:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libsensors4:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbsd0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libavahi-common-data:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libss2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libblkid1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libslang2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libacl1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcomerr2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libkrb5support0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'e2fslibs:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'librtmp0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libidn11:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpcap0.8:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libattr1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libdevmapper1.02.1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'odbcinst1debian2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libltdl7:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libkeyutils1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcups2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libsqlite3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libck-connector0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'zlib1g:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libnl1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libfontconfig1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libudev0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libsepol1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libmagic1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libk5crypto3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libunistring0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgpg-error0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libusb-0.1-4:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpam0g:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpopt0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgssapi-krb5-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgeoip1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcurl3-gnutls:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libtasn1-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libuuid1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgcrypt11:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgdbm3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libdbus-1-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libsysfs2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libfreetype6:amd64' missing; assuming package has no files currently installed
(Reading database ... 21908 files and directories currently installed.)
Unpacking cowsay (from .../cowsay_3.03+dfsg1-4_all.deb) ...
Processing triggers for man-db ...
Setting up cowsay (3.03+dfsg1-4) ...
root@debian:~#

Everything works fine, but these warning messages are pretty annoying. Does anyone know how I can fix this?

ls -la /var/lib/dpkg/info | grep libssh:

-rw-r--r-- 1 root root    327 Sep 21 15:51 libssh2-1.list
-rw-r--r-- 1 root root    359 Aug 15 06:06 libssh2-1.md5sums
-rwxr-xr-x 1 root root    135 Aug 15 06:06 libssh2-1.postinst
-rwxr-xr-x 1 root root    132 Aug 15 06:06 libssh2-1.postrm
-rw-r--r-- 1 root root     20 Aug 15 06:06 libssh2-1.shlibs
-rw-r--r-- 1 root root   4377 Aug 15 06:06 libssh2-1.symbols
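For what it's worth, a rough check along these lines should list the affected packages (the naming of multiarch .list files seems to vary between dpkg versions, hence the two tests):

for pkg in $(dpkg-query -W -f='${Package}:${Architecture}\n'); do
    base=${pkg%%:*}    # package name without the :arch suffix
    [ -e "/var/lib/dpkg/info/$pkg.list" ] || [ -e "/var/lib/dpkg/info/$base.list" ] || echo "$pkg"
done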

User in domain admin group cannot access directory the group has permission to access

Posted: 02 Jul 2022 02:39 PM PDT

I've run into a rather interesting issue when playing with one of my domain labs.

There's a directory on a 2008 R2 fileserver that's being used for folder redirection for all users in the "Staff" OU. The directory has the following permissions set:

  • FILESERVER\Administrators: Allow full control to the directory, subdirectories, and files
  • DOMAIN\Domain Admins: Allow full control to the directory, subdirectories, and files
  • Authenticated Users: Allow create files, create folders, write attributes, and write extended attributes to the top directory only

In addition, the directory is also a network share with "Allow full control" to the Authenticated Users group.
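For reference, the Domain Admins entry above corresponds to a grant along these lines (the path is a placeholder for the actual redirection folder):

icacls "D:\Shares\Redirected" /grant "DOMAIN\Domain Admins:(OI)(CI)F"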

When user john.doe, a member of the Domain Admins group, tries to access the directory from the fileserver, he gets the error "You don't currently have permission to access this folder". Trying to access the network share from the same server also results in a permission-denied error (although the user can still access his own directory within the share).

Accessing the share from another computer logged on as the same user allows access as configured.

The only way to access the files in the directory while logged on to the file server is by opening an elevated command prompt. UAC prompting is disabled for all computers in the domain through Group Policy ("Run all administrators in Admin Approval Mode" enabled, and the default elevation behavior set to elevate without prompting).
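In case it helps, comparing the token between a normal and an elevated prompt should show whether the Domain Admins membership is being filtered:

rem run in both a normal and an elevated prompt and compare the attributes column
whoami /groups | findstr /i /C:"Domain Admins"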

All roads point to the user being allowed access, but it's still being denied. Any ideas?

Cloud Computing - Multiple Physical Computers, One Logical Computer

Posted: 02 Jul 2022 12:49 PM PDT

I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit?

Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth of resources, and 100 others that eat up the remaining eight. As demands change, you just reallocate logical resources; maybe the two-computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database or migrating data over to a new server.

Can it work this way?

If yes, why would anyone ever do things like hand-partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's supporting infrastructure to support multiple databases, etc.

SVN Checkout URL - fresh install

Posted: 02 Jul 2022 03:47 PM PDT

I just set up SVN on a server running a fresh install of Ubuntu Server. I've got it up and running but am having difficulty determining how to connect to it.

I'm trying to do an import using the local IP address, http://IP/RepositoryName, but it says it can't resolve the IP. I'm wondering if there's something on the server I need to set up.

I have not modified dav_svn.conf, because there is another server here running SVN (I'm migrating it to a new server) and its dav_svn.conf is not modified. The current working SVN has a subdomain associated with the server's IP address but doesn't do anything special with the ports as far as I can tell.

I'm getting this error via RapidSVN when I try to import...

Error: Error while performing action: OPTIONS of 'http://IP/RepositoryName': could not connect to server (http://IP)  

Any help would be appreciated.

Update: I'm now connecting to the server (I didn't realize Ubuntu Server comes bare, without Apache and SSH) and getting the response

svn: OPTIONS of 'http://IP/RepositoryName': 200 OK (http://IP)  

when I run a checkout. It sounds to me like there's a disconnect between Apache and the SVN service.
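For reference, I'd expect dav_svn.conf to need a block along these lines to wire Apache to the repository (the filesystem path here is a guess at where my repository lives):

<Location /RepositoryName>
  DAV svn
  SVNPath /var/svn/RepositoryName
</Location>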

Enable PHP extension within .htaccess

Posted: 02 Jul 2022 04:56 PM PDT

Does anyone know how to enable a PHP extension within the .htaccess file?

Possibly:

(.htaccess file)
php_value extension=php_soap.so
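For what it's worth, the php_value examples I've seen elsewhere use a space rather than an equals sign, e.g.:

php_value upload_max_filesize 10M

but I don't know whether the extension directive can be set per-directory at all.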
