Sunday, September 26, 2021

Recent Questions - Server Fault


WinSCP claims that Ubuntu server folder is full, but folder is on a system with 2 TB free

Posted: 26 Sep 2021 09:25 PM PDT

I have an Ubuntu server to which I'd like to transfer a Minecraft server. However, when using WinSCP or File Explorer to transfer said server, I get told that I do not have enough space. Not only does my server have 2 TB of free space, but File Explorer tells me that I lack 190 MB of space, implying that the destination folder has a set amount of space. How do I resolve this?
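If it helps to narrow this down, a couple of quick checks on the Ubuntu side can show whether the destination directory really sits on the 2 TB filesystem or on a smaller mount or quota (a sketch; the path is a placeholder):

# check which filesystem the destination actually lives on, and its free space
df -h /home/minecraft

# check whether a user or group quota applies there (requires the quota tools)
quota -s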

How does one make a "fake proxy" in Windows for localhost use?

Posted: 26 Sep 2021 08:27 PM PDT

I'm trying to set up a web browser (Pale Moon) profile which uses a specific network interface/address. (To be able to bypass the VPN connection, since so many sites block all VPNs/proxies.)

I am able to do it with Deluge (a BitTorrent client), because all the proxies are blocked from P2P traffic now, but Deluge is one of the few applications that let you change this. Pale Moon and most other programs do not. They only let you set a proxy, not change the network interface or "bind address" as it's sometimes called.

How do I, without installing some sketchy software or having to pay money, accomplish a "fake proxy" hosted on localhost which simply routes traffic sent to it, on my computer only, out through my given network interface/address? That is, its only purpose is to bypass my VPN connection for software which doesn't let the user change the network interface to use.

I've unsuccessfully tried to solve this issue for many years now.

kex_exchange_identification error with Windows10 OpenSSH server

Posted: 26 Sep 2021 07:53 PM PDT

I installed OpenSSH-server on my Windows 10 PC (probably a "home" version, not a Windows server) using the Microsoft guide. I did not change the C:/Windows/System32/OpenSSH/sshd_config_default file (although I don't think that's relevant here anyway). I can log in to the machine from a terminal on that same machine:

localhost SSH

I have another machine running on the same LAN (both wired to the same SoHo router). From it, trying to connect to Windows 10 fails with:

kex_exchange_identification: Connection closed by remote host
Connection closed by 10.0.3.130 port 22

According to this answer to another similar thread, this error happens when the server closes the TCP connection during the cryptographic exchange, or something like that. So I've looked at Windows' firewall, but there is an enabled inbound rule for TCP port 22 (and, besides, if it had been a missing rule issue, the SSH client would just time out, not fail with a kex_exchange_identification error):

firewall

So I've tried running Wireshark on the server (10.0.3.130). It appears the server accepts the TCP handshake, then the other machine (10.0.3.10) sends some protocol SSH packet, and then the server just closes the connection:

Wireshark OpenSSH started

To see what would happen, I went to Windows' Services app and stopped the OpenSSH SSH Server service, then tried the same thing, but the result with Wireshark is the same:

Wireshark OpenSSH stopped

The one thing I've noticed, and that I don't quite understand, is that running netstat -ab in an administrator PowerShell shows that port 22 has an active listener on it, even when OpenSSH is stopped (just Windows things I guess...):

netstat command
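For what it's worth, PowerShell can usually map a listener back to its owning process, which might explain the port-22 listener that survives stopping the service (a sketch, assuming the NetTCPIP module that ships with Windows 10):

# find the process that owns the port 22 listener
Get-NetTCPConnection -LocalPort 22 -State Listen |
    Select-Object LocalAddress, LocalPort, OwningProcess,
        @{Name = 'ProcessName'; Expression = { (Get-Process -Id $_.OwningProcess).ProcessName }}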

So, yeah... I'm stumped at this point. Any ideas?

MySQL CPU usage difference between my local server and Linode server

Posted: 26 Sep 2021 07:42 PM PDT

I have 2 similar servers, one on my local computer, the other on Linode. OS is Ubuntu 20.04 LTS on both.

When I import a large database into MySQL, my local computer uses less than 10% of CPU while the Linode server uses 100% of CPU. So the Linode server completes the import much faster than my local computer.

What configuration makes the difference?
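A hedged starting point: dump the import-relevant settings on both machines and diff them. The variable names are standard MySQL ones; the grep list below is just the usual suspects, not an exhaustive answer.

# run on both servers and compare the output
mysql -NB -e "SHOW GLOBAL VARIABLES" \
  | grep -E 'innodb_buffer_pool_size|innodb_flush_log_at_trx_commit|innodb_io_capacity|innodb_log_file_size|sync_binlog'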

BTRFS on RHEL8 - compiling kernel module or making userspace tools work

Posted: 26 Sep 2021 06:32 PM PDT

Context

I recently installed RHEL 8 without realizing that it no longer supports BTRFS.

Unfortunately, I have 4 disks in BTRFS RAID10. I don't have enough space on my other disks to hold the data present on the BTRFS disks, so copying it all elsewhere while booted from a USB drive is out the window.

I have my initial question, then some followup questions regarding the approaches I took that failed. Feel free to just focus on the "how to get BTRFS working" part of this problem, although I am curious to learn about the other issues if you have answers to any part of this.

Question 1

  1. Is there a (relatively) easy way to get BTRFS working on RHEL8?

Userspace btrfs-progs

My first attempt to make this work was to compile and install btrfs-progs with the following:

# Install deps
sudo dnf install libuuid-devel libblkid-devel lzo-devel zlib-devel libzstd-devel e2fsprogs-devel e2fsprogs-libs e2fsprogs libgcrypt-devel libsodium-devel libattr-devel
# Install deps for doc gen
sudo dnf install asciidoc xmlto source-highlight

# Shallow-clone latest release
git clone --depth 1 --branch v5.14.1 https://github.com/kdave/btrfs-progs.git
cd btrfs-progs
git switch -c v5.14.1

# Build
#   --disable-zoned since that feature needs kernel >=5.10
export CFLAGS="-O3 -pipe -frecord-gcc-switches -mtune=native -march=native"
export CPPFLAGS=$CFLAGS
export SODIUM_CFLAGS=$CFLAGS
export ZSTD_CFLAGS=$CFLAGS
export ZLIB_CFLAGS=$CFLAGS
export UUID_CFLAGS=$CFLAGS
export PYTHON_CFLAGS=$CFLAGS

./autogen.sh
./configure --with-crypto=libsodium --disable-zoned
make -j12
sudo make install

It seems to install correctly and is accessible to my user:

$ which btrfs
/usr/local/bin/btrfs

$ ls -1 /usr/local/bin/ | grep btrfs
btrfs
btrfsck
btrfs-convert
btrfs-find-root
btrfs-image
btrfs-map-logical
btrfs-select-super
btrfstune
fsck.btrfs
mkfs.btrfs

$ btrfs version
btrfs-progs v5.14.1

However, root apparently doesn't have /usr/local/bin in its PATH by default. I added export PATH+=":/usr/local/bin" to /etc/profile and /root/.bash_profile, but neither of them seems to get sourced automatically when using sudo or when dropping into a root shell with sudo su.
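One hedged explanation: sudo resets PATH to the secure_path value from /etc/sudoers, so profile edits never apply to it, and sudo su starts a non-login shell that skips .bash_profile. A sketch of the usual sudoers tweak (edit with visudo; the exact default path list varies by install):

# /etc/sudoers (via visudo): append the /usr/local directories
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin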

When specifying the full path to the binary, it complains that it can't open /dev/btrfs-control. Querying my local search engine, some suggestions point to needing udev, but that's already installed (maybe misconfigured?):

$ sudo btrfs version
sudo: btrfs: command not found

$ sudo /usr/local/bin/btrfs device scan
Scanning for Btrfs filesystems
WARNING: failed to open /dev/btrfs-control, skipping device registration: No such file or directory
WARNING: failed to open /dev/btrfs-control, skipping device registration: No such file or directory
WARNING: failed to open /dev/btrfs-control, skipping device registration: No such file or directory
WARNING: failed to open /dev/btrfs-control, skipping device registration: No such file or directory
ERROR: there were 4 errors while registering devices
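For context (hedged): /dev/btrfs-control is a misc character device registered by the kernel's btrfs module, and udev only creates the node once that module is loaded. Since the RHEL 8 kernel no longer ships btrfs, the userspace tools alone can't conjure it. A quick check:

# the control node only appears when the kernel module is present
sudo modprobe btrfs || echo "no btrfs module in this kernel"
ls -l /dev/btrfs-control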

Other BTRFS commands seem to work:

$ sudo /usr/local/bin/btrfs filesystem show /dev/sda
Label: 'wdred'  uuid: aaaa-bbbb-cccc-dddd-eeee
    Total devices 4 FS bytes used 2.13TiB
    devid    1 size 5.46TiB used 1.07TiB path /dev/sda
    devid    2 size 5.46TiB used 1.07TiB path /dev/sdc
    devid    3 size 5.46TiB used 1.07TiB path /dev/sdb
    devid    4 size 5.46TiB used 1.07TiB path /dev/sdd

However, I've been afraid to mount the partitions or perform any operations on them given the above error, for fear that it's missing components that would cause it to mangle my data.

Questions 2, 3, 4

  2. Is it safe to attempt to use btrfs to mount the disks given the above errors?
  3. What's up with the /dev/btrfs-control error on btrfs device scan?
  4. How can I get sudo and sudo su to have /usr/local/bin in their path by default?

BTRFS kernel module

I wondered if it might be better to compile a kernel module, but since I have almost no kernel hacking experience, it went badly.

It seems like I need to set CONFIG_BTRFS_FS=m in my kernel config to start. It's not there currently, and I seem to remember being able to do this in menuconfig.

$ grep "BTRFS" /boot/config-4.18.0-305.19.1.el8_4.x86_64
# CONFIG_BTRFS_FS is not set

The RHEL documentation mentions how to load kernel modules and such, but not about how to build them. I consulted archwiki, and tried to download the RHEL8 kernel from the Red Hat site. The download page for RHEL8 had a "Sources" tab with a 20G .iso file. I downloaded it, mounted it, and found it was stuffed with .rpm files and didn't look anything like the linux kernel source repo. I was a little lost.
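If the goal is just to get at the actual source tree, the kernel src.rpm on that sources ISO can be unpacked without rebuilding anything; a sketch (the exact RPM and tarball filenames are placeholders matching the running kernel version):

# extract the source RPM's contents into the current directory
rpm2cpio kernel-4.18.0-305.19.1.el8_4.src.rpm | cpio -idmv
# the kernel tree itself is inside the linux-*.tar.xz that falls out
tar xf linux-4.18.0-305.19.1.el8_4.tar.xz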

I then went to /usr/src/kernels/, initialized a git repo out of fear I would clobber something important, and went on trying to figure out how to build the kernel module or change things in menuconfig.

$ cd /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64
$ sudo su
# git init
# git add -A
# git commit -m "Unmodified kernel"

# make mrproper
  HOSTCC  scripts/basic/bin2c
scripts/kconfig/conf  --syncconfig Kconfig
arch/x86/Makefile:184: *** Compiler lacks asm-goto support..  Stop.
make: *** [Makefile:1361: _clean_arch/x86] Error 2

Searching for the asm-goto error, the internet suggested that I may need elfutils-libelf-devel, but I appear to already have that.

For funzies I tried building it with clang and with gcc-toolset-10, but both of those had the same error.
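The asm-goto test is a tiny shell script in the kernel tree, so it can be run by hand to see whether the check or the compiler is at fault. A sketch, assuming the headers tree ships scripts/gcc-goto.sh the way 4.18 upstream does:

# prints "y" if the compiler supports asm goto
cd /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64
./scripts/gcc-goto.sh gcc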

Question 5

  • Any ideas why Compiler lacks asm-goto support?
  • What's a good resource on how to build kernel modules / patch kernels / modify your system's kernel?

System info

$ uname -a
Linux rhel 4.18.0-305.19.1.el8_4.x86_64 #1 SMP Tue Sep 7 07:07:31 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

$ gcc --version
gcc (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)

$ scl run gcc-toolset-10 'gcc --version'
gcc (GCC) 10.2.1 20201112 (Red Hat 10.2.1-8)

$ clang --version
clang version 11.0.0 (Red Hat 11.0.0-1.module+el8.4.0+8598+a071fcd5)

Thanks for reading this far! Any help is appreciated.

Random file in C drive Windows 10

Posted: 26 Sep 2021 05:49 PM PDT

I recently noticed a random file in my C drive. I've tried opening it in Notepad and there is nothing visible in the file. It has the same name as my computer's main user, it cannot be unzipped with 7-Zip, and it's attributed as an 'A' (archive) file. I've run a scan for malicious files and nothing has been flagged. The file is 0 bytes. Is this some sort of cache file or something malicious I should delete? It's concerning me, and if anyone has knowledge of this please let me know.
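If it's useful, a few read-only checks in PowerShell can say more about the file than Notepad does (a sketch; replace the path with the actual filename):

# who owns it, when it was created, and what attributes it carries
Get-Item -Force 'C:\username' | Format-List FullName, Length, Attributes, CreationTime, LastWriteTime
Get-Acl 'C:\username' | Format-List Owner, Access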

DNS setup for no subdomain

Posted: 26 Sep 2021 06:56 PM PDT

I can't figure out why going to my website, mywebsite.app, does not work, while www.mywebsite.app and anythinghere.mywebsite.app work.

Am I missing something in this DNS list?

All these CNAMEs are working:

  1. * eg whocares.mywebsite.app points to my main S3 bucket and works
  2. admin eg admin.mywebsite.app points to my secondary S3 bucket and works
  3. api eg api.mywebsite.app points to my heroku app and works
  4. www eg www.mywebsite.app points to my main S3 bucket and works
  5. Those long complicated ones are for AWS certificate management

What am I missing to get mywebsite.app, i.e. without any subdomain, doing the same thing as the * rule? Why does the * not include "nothing"?
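One relevant detail, hedged: a wildcard record never covers the bare apex, and the apex itself cannot be a CNAME, so it needs its own record pointing at the same endpoint, using whatever ALIAS/ANAME-style record the DNS host offers (Route 53 calls it an alias record). Roughly what the record set would need to look like, with placeholder targets:

; wildcard and www cover subdomains only
*.mywebsite.app.    CNAME   main-bucket-endpoint.example.
www.mywebsite.app.  CNAME   main-bucket-endpoint.example.
; the apex needs an A/ALIAS-type record of its own; a plain CNAME is not allowed here
mywebsite.app.      A       (alias/ANAME to the same endpoint)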

Windows NLB: Stability of source IP -> target machine mapping

Posted: 26 Sep 2021 05:57 PM PDT

We are using Windows NLB in our production environment to load balance https requests to ten IIS servers.

The settings are:

Port: 443
Protocol: TCP
Filtering Mode: Multiple Hosts
Affinity: Single
Timeout (in minutes): Disabled

So as long as there is no change in the NLB cluster membership, i.e. no hosts are leaving/joining the cluster, the same source IP will always be mapped to the same target IIS server (according to the docs for Server 2003 R2; I can't find newer official docs that detail affinity).

Now what if I take down some or all machines and bring them all up again, so the cluster membership is the same as it was before taking down machines? Will the same source IP addresses again map to the same target servers as before, or does the mapping depend on things like cluster join order or something non-deterministic?


N.B. I just found that during convergence, the mapping may change:

In normal operations, if single affinity is used, NLB will guarantee that all connections coming from the same source IP will hit the same server. This would include multiple TCP connections of the same SSL session. However, configuration changes might cause NLB to accept different connections of the same SSL session by different servers during the convergence. As a result, the SSL session is broken.

Which is fair enough (and can be mitigated by enabling "extended affinity" using the "timeout" value, as the linked article explains). However I'm interested in the state after convergence.

OpenVPN clients still getting assigned same ip with duplicate-cn in server config

Posted: 26 Sep 2021 04:42 PM PDT

Clients are still getting assigned the same IP as each other even with duplicate-cn in the server config. I can't figure out what I'm doing wrong. I've restarted all the machines involved after changing the config. Here is my server.conf.

port 1194
proto tcp6
dev tun
user nobody
group nogroup
persist-key
persist-tun
keepalive 10 120
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 1.1.1.1"
push "redirect-gateway def1 bypass-dhcp"
server-ipv6 fd42:42:42:42::/112
tun-ipv6
push tun-ipv6
push "route-ipv6 2000::/3"
push "redirect-gateway ipv6"
dh none
ecdh-curve prime256v1
tls-crypt tls-crypt.key
crl-verify crl.pem
ca ca.crt
cert server_PxV2VymBrucFUPiE.crt
key server_PxV2VymBrucFUPiE.key
auth SHA256
cipher AES-128-GCM
ncp-ciphers AES-128-GCM
tls-server
tls-version-min 1.2
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-128-GCM-SHA256
client-config-dir /etc/openvpn/ccd
status /var/log/openvpn/status.log
duplicate-cn
client-to-client
verb 3

FreeIPA: External DNS requests (google etc.) fail for clients on new subnet

Posted: 26 Sep 2021 05:53 PM PDT

I'm trying to rebuild my home network to make use of FreeIPA to manage some Linux clients. This has all gone well on my main network (192.168.222.0/24) with all clients being able to resolve both internal DNS and external requests for google etc. All clients on that network can SSH (with sudo) using a user I created in FreeIPA.

The issue comes when I try to connect my Wifi network (192.168.30.0/24) to the FreeIPA server. Clients on the Wifi network can only resolve internal DNS. Requests for google.com etc. are ignored. This works fine on my main network.

So from a host on my main 192.168.222.0/24 network:

[root@kvm ~]# dig @auth.brocas.home monitoring.brocas.home +short
192.168.222.130
[root@kvm ~]# dig @auth.brocas.home google.com +short
172.217.169.78

But on my 192.168.30.0/24 network, no external DNS requests are resolved:

[manjaro-i3 ~]# dig @auth.brocas.home monitoring.brocas.home +short
192.168.222.130
[manjaro-i3 ~]# dig @auth.brocas.home google.com +short
[manjaro-i3 ~]#
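One hedged thing to check: the symptom (internal names resolve, recursion fails) is what you would see if BIND on the FreeIPA server only allows recursion for the original subnet. Something along these lines in the server's named configuration (the exact file and whether FreeIPA manages it for you depend on the version, so treat this as a sketch):

// allow recursive lookups from both subnets
acl "trusted" { 127.0.0.1; 192.168.222.0/24; 192.168.30.0/24; };
options {
    allow-recursion { "trusted"; };
    allow-query-cache { "trusted"; };
};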

Does anyone know why this might be?

Thanks in advance.

Fast intercontinental Remote Desktop Sessions

Posted: 26 Sep 2021 07:18 PM PDT

I will be working from Thailand for the next months and I want to be able to RDP or AnyDesk into my office PC in Greece. Previous attempts have proven unstable, with varied results throughout the day, from flawless to terrible connections. I am reading that ping time is the most crucial factor in such connections. I have tried subscribing to a VPN service, but the results are at best the same as not using a VPN at all.

I am interested to find out if there is a tried and tested way to approach this issue. I am not aware of how intercontinental routing is performed or whether one can direct packets through an optimal route. I think this is outside the end user's control, but I would be interested to know if there are companies and subscription options that accomplish intercontinental routes with low latency.
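As a way to quantify the good versus bad times of day, it may help to log the path and latency periodically from the Thailand side and compare runs (a sketch using mtr; the office address is a placeholder):

# 100 probes, report mode, hostnames plus IPs and AS numbers per hop
mtr -rwbzc 100 office.example.gr >> ~/rdp-path-$(date +%F-%H%M).txt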

AWS/S3/CloudFront/GoDaddy and subdomains

Posted: 26 Sep 2021 05:13 PM PDT

I am wondering and hoping this is possible.

On AWS I have two S3 buckets to separate a client-facing application and an administrative application:

  1. mysite-app
  2. mysite-admin

I have a cloudfront distribution for each application.

I have one AWS managed certificate for the following

  1. *.mysite.app
  2. mysite.app

So far everything works for mysite-app, and I am just adding the admin portion now.

To the question: will it be possible to have everything on *.mysite.app go to my mysite-app distribution, except admin.mysite.app, which goes to my mysite-admin distro?

Would I need separate certificates for the following?

  1. admin.mysite.app
  2. *.mysite.app/mysite.app

Would GoDaddy be able to support this?
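For what it's worth, the existing *.mysite.app + mysite.app certificate already covers admin.mysite.app (a wildcard matches one label), so separate certificates would not obviously be needed; the split is usually done with the alternate domain names on the two distributions plus two CNAMEs at the DNS host, where the more specific name simply points at the admin distribution. A hedged sketch of the GoDaddy records, with placeholder CloudFront hostnames:

; DNS records at GoDaddy (illustrative values only)
admin   CNAME   d1111111admin.cloudfront.net.
*       CNAME   d2222222app.cloudfront.net.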

HP P420 Bandwidth Limitation

Posted: 26 Sep 2021 07:32 PM PDT

I have an HP DL360p G8 with a P420 with 1GB cache, 4x Seagate Constellation 3TB in RAID 5, and a 10Gbps uplink to my datacenter. As I understand it, the P420 is 6Gbps. Is that for all of the interfaces combined, or is it 6Gbps per interface/hard disk?

How can I set up two or more reverse zones for one IP with bind9?

Posted: 26 Sep 2021 04:59 PM PDT

When I try to define a second reverse zone with the same IP as an existing reverse zone, named-checkconf throws:

$ named-checkconf named.conf.cust_zone
named.conf:63: zone '70.231.168.192.in-addr.arpa': already exists previous definition: named.conf:52

My config file.

...
// zone for the 1st domain
zone "domain_1.com"{
    type master;
    file "file"
}

// reverse zone for 1st domain
zone "70.231.168.192.in-addr.arpa" {
    type master;
    file "file";
}

// zone for the 2nd domain
zone "domain_2.com"{
    type master;
    file "file"
}

// reverse zone for 2nd domain
zone "70.231.168.192.in-addr.arpa" {
    type master;
    file "file";
}
...

It works when I change one of the reverse zone names, but I am not sure whether it would actually work.

So, is there any way to configure two reverse zones for one IP?
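For reference, a hedged sketch of the usual approach: the reverse zone is declared exactly once, and the single zone file then carries one PTR record per name (whether multiple PTRs on one IP is sensible for your setup is a separate question). The zone file name and hostnames below are placeholders:

// named.conf: a single reverse zone definition shared by both domains
zone "70.231.168.192.in-addr.arpa" {
    type master;
    file "db.192.168.231.70";
};

; db.192.168.231.70 (illustrative zone file contents)
@   IN  PTR  host.domain_1.com.
@   IN  PTR  host.domain_2.com.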

Kubernetes trouble - /var/lib/calico/nodename: no such file or directory

Posted: 26 Sep 2021 07:17 PM PDT

I'm following a guide from the Linux Foundation "Kubernetes Administrator" course and am stuck on deploying a simple app. I think the trouble starts even earlier than the app deployment.

I've created a master and a worker, and they seem to be OK:

$ kubectl get nodes
NAME                       STATUS   ROLES    AGE   VERSION
ubuntu-training-server-1   Ready    master   63m   v1.19.1
ubuntu-training-server-2   Ready    <none>   57m   v1.19.1

But here is something wrong:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                               READY   STATUS              RESTARTS   AGE
default       nginx-6799fc88d8-556z4                             0/1     ContainerCreating   0          50m
kube-system   calico-kube-controllers-69496d8b75-thcl8           1/1     Running             1          63m
kube-system   calico-node-gl885                                  0/1     CrashLoopBackOff    20         58m
kube-system   calico-node-jvc59                                  1/1     Running             1          63m
kube-system   coredns-f9fd979d6-hjfst                            1/1     Running             1          64m
kube-system   coredns-f9fd979d6-kvx42                            1/1     Running             1          64m
kube-system   etcd-ubuntu-training-server-1                      1/1     Running             1          64m
kube-system   kube-apiserver-ubuntu-training-server-1            1/1     Running             1          64m
kube-system   kube-controller-manager-ubuntu-training-server-1   1/1     Running             1          64m
kube-system   kube-proxy-9899t                                   1/1     Running             1          58m
kube-system   kube-proxy-z6b22                                   1/1     Running             1          64m
kube-system   kube-scheduler-ubuntu-training-server-1            1/1     Running             1          64m

I mean not all are ready.

If I try to get details about the troubled node, I see:

$ kubectl logs -n kube-system calico-node-gl885
Error from server (NotFound): the server could not find the requested resource ( pods/log calico-node-gl885)

And when I try to deploy nginx I get:

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

and

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/1     1            0           52m

There are troubles here too:

$ kubectl get events
...
92s         Warning   FailedCreatePodSandBox    pod/nginx-6799fc88d8-556z4      Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4c451bc2c92f555c930f84e4e8b7082a03dd2824cf50948d348893ebea488d93" network for pod "nginx-6799fc88d8-556z4": networkPlugin cni failed to set up pod "nginx-6799fc88d8-556z4_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
...

I see no /var/lib/calico/nodename on the worker node, only on the master, and the guide only mentioned running kubectl apply -f calico.yaml on the master.

Could anybody help me get rid of the Calico errors? I've tried to search and have seen similar cases, but it looks like they are about something different.
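Since the crashing pod refuses to hand over logs, two hedged ways to pull more detail out of it:

# events and container state for the crashing DaemonSet pod
kubectl -n kube-system describe pod calico-node-gl885

# logs from the previous (crashed) container instance, naming the container explicitly
kubectl -n kube-system logs calico-node-gl885 -c calico-node --previous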

UPDATE

I've found a possible networking conflict (the Calico config contained 192.168.0.0/16 and my VirtualBox adapter was 192.168.56.0/24), so I reset the cluster, changed the Calico config and networking/podSubnet in kubeadm-config.yaml to 192.168.0.0/24, and initialized the cluster again.

The new status is as follows.

Seem OK:

$ kubectl get nodes
NAME                       STATUS   ROLES    AGE   VERSION
ubuntu-training-server-1   Ready    master   39m   v1.19.1
ubuntu-training-server-2   Ready    <none>   38m   v1.19.1

Seem OK too:

$ kubectl get events
LAST SEEN   TYPE     REASON                    OBJECT                          MESSAGE
36m         Normal   Starting                  node/ubuntu-training-server-1   Starting kubelet.
36m         Normal   NodeHasSufficientMemory   node/ubuntu-training-server-1   Node ubuntu-training-server-1 status is now: NodeHasSufficientMemory
36m         Normal   NodeHasNoDiskPressure     node/ubuntu-training-server-1   Node ubuntu-training-server-1 status is now: NodeHasNoDiskPressure
36m         Normal   NodeHasSufficientPID      node/ubuntu-training-server-1   Node ubuntu-training-server-1 status is now: NodeHasSufficientPID
36m         Normal   NodeAllocatableEnforced   node/ubuntu-training-server-1   Updated Node Allocatable limit across pods
36m         Normal   NodeReady                 node/ubuntu-training-server-1   Node ubuntu-training-server-1 status is now: NodeReady
35m         Normal   RegisteredNode            node/ubuntu-training-server-1   Node ubuntu-training-server-1 event: Registered Node ubuntu-training-server-1 in Controller
35m         Normal   Starting                  node/ubuntu-training-server-1   Starting kube-proxy.
35m         Normal   Starting                  node/ubuntu-training-server-2   Starting kubelet.
35m         Normal   NodeHasSufficientMemory   node/ubuntu-training-server-2   Node ubuntu-training-server-2 status is now: NodeHasSufficientMemory
35m         Normal   NodeHasNoDiskPressure     node/ubuntu-training-server-2   Node ubuntu-training-server-2 status is now: NodeHasNoDiskPressure
35m         Normal   NodeHasSufficientPID      node/ubuntu-training-server-2   Node ubuntu-training-server-2 status is now: NodeHasSufficientPID
35m         Normal   NodeAllocatableEnforced   node/ubuntu-training-server-2   Updated Node Allocatable limit across pods
22s         Normal   CIDRNotAvailable          node/ubuntu-training-server-2   Node ubuntu-training-server-2 status is now: CIDRNotAvailable
35m         Normal   Starting                  node/ubuntu-training-server-2   Starting kube-proxy.
35m         Normal   RegisteredNode            node/ubuntu-training-server-2   Node ubuntu-training-server-2 event: Registered Node ubuntu-training-server-2 in Controller
35m         Normal   NodeReady                 node/ubuntu-training-server-2   Node ubuntu-training-server-2 status is now: NodeReady

And here is the new trouble: calico-kube-controllers-69496d8b75-gdbd7 has been starting for more than half an hour:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                               READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-69496d8b75-gdbd7           0/1     ContainerCreating   0          37m
kube-system   calico-node-8xjsm                                  0/1     CrashLoopBackOff    13         37m
kube-system   calico-node-zktsh                                  1/1     Running             0          37m
kube-system   coredns-f9fd979d6-7bkwn                            1/1     Running             0          39m
kube-system   coredns-f9fd979d6-rsws5                            1/1     Running             0          39m
kube-system   etcd-ubuntu-training-server-1                      1/1     Running             0          39m
kube-system   kube-apiserver-ubuntu-training-server-1            1/1     Running             0          39m
kube-system   kube-controller-manager-ubuntu-training-server-1   1/1     Running             0          39m
kube-system   kube-proxy-2tvjp                                   1/1     Running             0          39m
kube-system   kube-proxy-jkzbz                                   1/1     Running             0          39m
kube-system   kube-scheduler-ubuntu-training-server-1            1/1     Running             0          39m

UPDATE 2

Details about my setup.

$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.19.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/24

Cluster was initialized with:

kubeadm init --config=kubeadm-config.yaml --upload-cert  

Find the process listening on a port (not showing in lsof)

Posted: 26 Sep 2021 04:23 PM PDT

Update:

The port was opened by NFS. I figured this out by configuring NFS to listen on a known port.
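For anyone hitting the same thing: a PID-less UDP listener like this is typical of kernel-side NFS (the lockd and callback sockets have no userspace process). A hedged sketch of how the ports are usually pinned on Ubuntu 18.04, using the standard lockd module options (the port number is just an example):

# /etc/modprobe.d/lockd.conf - pin the NLM (lockd) ports to something known
options lockd nlm_udpport=32768 nlm_tcpport=32768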


Using Ubuntu 18.04

I want to know which process or kernel function listens on UDP port 38637.

I have researched this problem and found similar questions:

I have tried all remedies I could find...

  • Unmounted all network drives
  • Rebooted. After reboot the same port listens.
  • Ran virus/trojan scans with clamav and sophos-av
  • nmap and nc both successfully connect on the port
  • Tried all the following inspections:

netstat

Shows no process ID

➜ sudo netstat -ulpen | grep 38637
udp        0      0 0.0.0.0:38637           0.0.0.0:*                           0          36601      -
udp6       0      0 :::38637                :::*                                0          36602      -

lsof

Shows nothing at all

➜ sudo lsof -i udp:38637
➜ echo $?
1

rpcinfo

Doesn't list the port of interest

➜ sudo rpcinfo -p | grep 38637
➜ echo $?
1

fuser

Troubling?

➜ sudo fuser -n udp 38637
Cannot stat file /proc/16147/fd/1023: Permission denied
Cannot stat file /proc/16374/fd/1023: Permission denied
Cannot stat file /proc/16375/fd/1023: Permission denied
Cannot stat file /proc/16380/fd/1023: Permission denied
Cannot stat file /proc/16381/fd/1023: Permission denied
Cannot stat file /proc/16382/fd/1023: Permission denied
Cannot stat file /proc/18061/fd/1023: Permission denied
Cannot stat file /proc/18177/fd/1023: Permission denied
Cannot stat file /proc/18183/fd/1023: Permission denied
Cannot stat file /proc/18188/fd/1023: Permission denied
Cannot stat file /proc/18189/fd/1023: Permission denied
Cannot stat file /proc/18190/fd/1023: Permission denied

inode doesn't give me anything either

➜ sudo fuser --inode 36601
Specified filename 36601 does not exist.

debugfs

Nada.

➜ sudo debugfs -R 'ncheck 36601' /dev/sda1
debugfs 1.44.1 (24-Mar-2018)
Inode   Pathname

I would appreciate any other suggestions on where to look for answers on why this port listens.

Logical Volume resize fails with insufficient free space, but PV and VG have free PE

Posted: 26 Sep 2021 09:08 PM PDT

So I would like to extend a logical volume, but I run into an error.

lvextend tells me it cannot allocate space because there is no free space on the PV, but there is.

Is there something I am missing here?

sudo lvextend -L 80G /dev/mapper/ncvps--vg-data  Insufficient free space: 7680 extents needed, but only 0 available    sudo pvdisplay -m    --- Physical volume ---    PV Name               /dev/mapper/sda5_crypt    VG Name               ncvps-vg    PV Size               749.76 GiB / not usable 3.00 MiB    Allocatable           NO    PE Size               4.00 MiB    Total PE              191937    Free PE               168913    Allocated PE          23024    PV UUID               XOvj5D-ClTq-gfsw-F59L-aIqN-1tdt-OMGYrI      --- Physical Segments ---    Physical extent 0 to 7151:      Logical volume  /dev/ncvps-vg/root      Logical extents 0 to 7151    Physical extent 7152 to 9199:      Logical volume  /dev/ncvps-vg/swap_1      Logical extents 0 to 2047    Physical extent 9200 to 10223:      Logical volume  /dev/ncvps-vg/home      Logical extents 0 to 1023    Physical extent 10224 to 23023:      Logical volume  /dev/ncvps-vg/data      Logical extents 0 to 12799    Physical extent 23024 to 191936:      FREE    additional information:    sudo vgdisplay -vv        Setting activation/monitoring to 1        Setting global/locking_type to 1        Setting global/wait_for_locks to 1        File-based locking selected.        Setting global/prioritise_write_locks to 1        Setting global/locking_dir to /run/lock/lvm        Setting global/use_lvmlockd to 0        Setting response to OK        Setting token to filter:3239235440        Setting daemon_pid to 608        Setting response to OK        Setting global_disable to 0        Setting response to OK        Setting response to OK        Setting response to OK        Setting name to ncvps-vg        report/output_format not found in config: defaulting to basic        log/report_command_log not found in config: defaulting to 0        Processing VG ncvps-vg H0caAG-WH7t-SNWH-QxUg-4T2R-jW0F-fi6RW8        Locking /run/lock/lvm/V_ncvps-vg RB        Reading VG ncvps-vg H0caAGWH7tSNWHQxUg4T2RjW0Ffi6RW8        Setting response to OK        Setting response to OK        Setting response to OK        Setting name to ncvps-vg        Setting metadata/format to lvm2        Setting id to XOvj5D-ClTq-gfsw-F59L-aIqN-1tdt-OMGYrI        Setting format to lvm2        Setting device to 65024        Setting dev_size to 1572354048        Setting label_sector to 1        Setting ext_flags to 1        Setting ext_version to 2        Setting size to 1044480        Setting start to 4096        Setting ignore to 0        Setting response to OK        Setting response to OK        Setting response to OK        /dev/mapper/sda5_crypt: size is 1572356096 sectors        Process single VG ncvps-vg    --- Volume group ---    VG Name               ncvps-vg    System ID                 Format                lvm2    Metadata Areas        1    Metadata Sequence No  12    VG Access             read/write    VG Status             resizable    MAX LV                0    Cur LV                4    Open LV               3    Max PV                0    Cur PV                1    Act PV                1    VG Size               749.75 GiB    PE Size               4.00 MiB    Total PE              191937    Alloc PE / Size       23024 / 89.94 GiB    Free  PE / Size       168913 / 659.82 GiB    VG UUID               H0caAG-WH7t-SNWH-QxUg-4T2R-jW0F-fi6RW8          Adding ncvps-vg/root to the list of LVs to be processed.        Adding ncvps-vg/swap_1 to the list of LVs to be processed.        Adding ncvps-vg/home to the list of LVs to be processed.        
Adding ncvps-vg/data to the list of LVs to be processed.        Processing LV root in VG ncvps-vg.    --- Logical volume ---        global/lvdisplay_shows_full_device_path not found in config: defaulting to 0    LV Path                /dev/ncvps-vg/root    LV Name                root    VG Name                ncvps-vg    LV UUID                eNXazT-yprW-I2rq-K8hd-bGiy-e0Ur-XIBfEz    LV Write Access        read/write    LV Creation host, time ncvps, 2018-12-13 23:51:02 +0100    LV Status              available    # open                 1    LV Size                27.94 GiB    Current LE             7152    Segments               1    Allocation             inherit    Read ahead sectors     auto    - currently set to     256    Block device           254:1          Processing LV swap_1 in VG ncvps-vg.    --- Logical volume ---        global/lvdisplay_shows_full_device_path not found in config: defaulting to 0    LV Path                /dev/ncvps-vg/swap_1    LV Name                swap_1    VG Name                ncvps-vg    LV UUID                MKjhK4-vgMY-4ajM-twQV-fDs6-ApOl-Lifkjz    LV Write Access        read/write    LV Creation host, time ncvps, 2018-12-13 23:51:02 +0100    LV Status              available    # open                 2    LV Size                8.00 GiB    Current LE             2048    Segments               1    Allocation             inherit    Read ahead sectors     auto    - currently set to     256    Block device           254:2          Processing LV home in VG ncvps-vg.    --- Logical volume ---        global/lvdisplay_shows_full_device_path not found in config: defaulting to 0    LV Path                /dev/ncvps-vg/home    LV Name                home    VG Name                ncvps-vg    LV UUID                ybMqWA-0qcM-f6NI-oN7v-Ven9-fA3B-RSqtIn    LV Write Access        read/write    LV Creation host, time ncvps, 2018-12-13 23:51:02 +0100    LV Status              available    # open                 1    LV Size                4.00 GiB    Current LE             1024    Segments               1    Allocation             inherit    Read ahead sectors     auto    - currently set to     256    Block device           254:3          Processing LV data in VG ncvps-vg.    --- Logical volume ---        global/lvdisplay_shows_full_device_path not found in config: defaulting to 0    LV Path                /dev/ncvps-vg/data    LV Name                data    VG Name                ncvps-vg    LV UUID                TLqETm-cmUW-H40L-fUYr-evmF-VUDW-hLqIha    LV Write Access        read/write    LV Creation host, time ncvps, 2018-12-14 22:25:15 +0100    LV Status              available    # open                 0    LV Size                50.00 GiB    Current LE             12800    Segments               1    Allocation             inherit    Read ahead sectors     auto    - currently set to     256    Block device           254:4      --- Physical volumes ---    PV Name               /dev/mapper/sda5_crypt         PV UUID               XOvj5D-ClTq-gfsw-F59L-aIqN-1tdt-OMGYrI    PV Status             NOT allocatable    Total PE / Free PE    191937 / 168913          Unlocking /run/lock/lvm/V_ncvps-vg        Setting global/notify_dbus to 1  
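One detail that stands out in the output above is "Allocatable NO" / "PV Status NOT allocatable": lvextend cannot place new extents on a PV that is not allocatable, regardless of how many PE are free. A hedged sketch of the check and the switch (verify against your own setup before running anything):

# show the allocatable flag ("a" in the attr column) and free space per PV
sudo pvs -o pv_name,pv_attr,pv_free

# mark the PV allocatable again, then retry the extend
sudo pvchange -x y /dev/mapper/sda5_crypt
sudo lvextend -L 80G /dev/mapper/ncvps--vg-data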

OpenVPN server dedicated log missing log.conf?

Posted: 26 Sep 2021 07:08 PM PDT

I'm trying to run an OpenVPN server with its own log, not just writing to syslog with everything else. For some reason it looks like you can only set a different log location by using a command-line argument, either --log or --log-append. When I try to start the service with --log I get the error below:

service openvpn start --log /var/log/openvpn/openvpn-server.log
Starting VPN '/var/log/openvpn/openvpn-server.log': missing /etc/openvpn//var/log/openvpn/openvpn-server.log.conf file

I've not seen anything about a log.conf file in the documentation. Can someone please point me at a working example or at least explain why it's trying to look in the path that it is?
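One hedged workaround: log and log-append are also valid directives inside the instance's .conf file (OpenVPN accepts every --option without the dashes in a config file), which sidesteps the init script treating the extra argument as an instance name. A sketch, with a placeholder instance name:

# /etc/openvpn/server.conf
log-append /var/log/openvpn/openvpn-server.log
verb 3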

query Kerberos encryption modes supported by AD through LDAP

Posted: 26 Sep 2021 09:08 PM PDT

In short: I need a way to retrieve the encryption modes permitted in the network security policy of a Microsoft DC.

The encryption mode is essential to creating the right set of keys for service principals in the local keytab of a host.

User accounts have the attribute msDS-SupportedEncryptionTypes that gives the modes as a bitset. This can be configured by a Windows admin through some input form. "Computer accounts" however lack this attribute unless one manually sets the attribute in LDAP. And there is no similar input form.

Now, according to the official docs that setting is inherited for each "Computer account" from the local policy. I guess what I need is to look up this policy through LDAP. But how?
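For the per-account attribute at least, a plain LDAP query works; the domain-wide policy itself lives in Group Policy (SYSVOL) rather than in a convenient LDAP attribute, so this only covers accounts where the value has actually been set. A sketch with placeholder paths and credentials:

# read msDS-SupportedEncryptionTypes for one computer account
ldapsearch -H ldap://dc.example.com -D 'user@example.com' -W \
  -b 'CN=MYHOST,CN=Computers,DC=example,DC=com' msDS-SupportedEncryptionTypes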

Some old and all new published RemoteApps aren't appearing

Posted: 26 Sep 2021 05:08 PM PDT

So, here's our situation. We've had a RemoteApp environment for 2+ years, and I've recently been given access to help manage it. I've been trying to get a new app published so some off-site people can access an app I installed onto the host server. I published it, and it looks fine there, and it's set to allow it to appear in RD Web Access. I also checked permissions, and at least for now it's set to allow anyone who can access the collection access it, and that's fine until I get a new AD group created to manage access for it.

However, when I publish the app, I get an error saying it can't publish it on one server. The server it's throwing the error for isn't a host server, but it contains all of the other RD services (RDCM, RDCB, RDL, RDG, RDM) so it's in the collection. I'm GUESSING that this error is what's preventing the app from being fully published and made available to RD connections, but I could be off base there. I've also checked through the pre-existing RemoteApps and I noticed that another one that I should have access to also isn't showing in my list, so this issue looks like it's been around for at least a little while.

Our environment is pretty basic, but here it is:

Server1 - Hosts all the RDS related services, including Remote Desktop Management, RemoteApp and Desktop Connection Management, Remote Desktop Connection Broker, Remote Desktop Licensing, and Remote Desktop Gateway
Server2 - RemoteApp Host
Server3 - VDI host

I've checked through all of the RDP/TS related logs on Server1 and Server2, but I'm not seeing any errors popping out at me. And to be clear, any RemoteApps that had previously been published and available to me are still available and working. But any newly published RemoteApps, as well as at least one that had previously been published, aren't. I also added myself to an AD group that should have made a few more RemoteApps show up, but either I didn't wait long enough for AD LAN replication to occur (so, unlikely), or that's broken as well.

I've checked the RemoteApps listing via the published web portal, as well as via my already configured RemoteApp connection in Win10. I'm seeing the same content in both locations.

As far as the local groups, the following groups on Server1 have the following memberships:

RDS Endpoint Servers - All 3 servers
RDS Management Servers - Server1, Network Service, RDMS, TScPubRPC, Tssdis
RDS Remote Access Servers - Server1

Also, Get-RDRemoteApp lists all of the same published apps I see when going through Server Manager, including the ones that I don't see when going through my RemoteApp connections. And they all say ShowInWebAccess = True. And I know from a thread on Technet that there's some sort of alias character limit, but the alias for one of the new apps is only 9 characters, and some that are working are over 20 characters, so that's not it.

If anyone has any ideas I would love to hear them, since I'm kind of at a loss here.
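If it helps the comparison, the deployment's own view of which servers hold which roles (and therefore which servers a publish operation will try to touch) can be dumped with the same module Get-RDRemoteApp comes from. A sketch; the broker FQDN is a placeholder:

# list every server in the RDS deployment and the roles it holds
Get-RDServer -ConnectionBroker broker.example.local | Format-Table Server, Roles -AutoSize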

How can I list the logged in users with PowerShell?

Posted: 26 Sep 2021 08:08 PM PDT

I'm trying to get a list of the logged on users from PowerShell. Is there a command that returns a list of the logged in users, regardless of whether their session is connected or disconnected? I'm only looking for local sessions, but remote sessions would be nice too.
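A minimal sketch that usually covers local and RDP sessions alike, by objectifying the output of the built-in quser.exe; the column parsing is crude (disconnected sessions have a blank SESSIONNAME, which can shift fields), so treat it as a starting point:

# turn quser's fixed-width output into objects; includes Active and Disc sessions
(quser) -replace '^[\s>]+', '' -replace '\s{2,}', ',' | ConvertFrom-Csv |
    Select-Object USERNAME, SESSIONNAME, ID, STATE, 'LOGON TIME'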

Migrate Jira to MySQL - Unknown system variable 'storage_engine'

Posted: 26 Sep 2021 05:08 PM PDT

I'm trying to setup Jira on a vm. I want to move from the embedded H2 database to my own mysql database.

I'm currently running mysql 5.7.x on ubuntu 16.04. However when I try to connect Jira with this server I get the following error message:

Unknown system variable 'storage_engine'  

I already tried a number of things to fix this. First off, in my mysql.cnf file I added default-storage-engine = InnoDB. This didn't do anything for the error.

On my Jira vm I went into /opt/atlassian/jira/atlassian-jira/WEB-INF/classes/database-defaults

and changed it to

databaseUrl=jdbc:mysql://localhost/jira?autoReconnect=true&characterEncoding=utf8&useUnicode=true&sessionVariables=default-storage-engine=InnoDB  

as well as

databaseUrl=jdbc:mysql://localhost/jira?autoReconnect=true&characterEncoding=utf8&useUnicode=true  

Again nothing changed for the error. I also went into the dbconfig.xml file and removed the sessionVariables=storage-engine=InnoDB part from the url. Again this didn't change anything.

I've also tried changing it to sessionVariables=default-storage-engine=InnoDB

Again no luck.
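One hedged observation: MySQL session variable names use underscores, not hyphens, and storage_engine itself was removed in MySQL 5.7.5 in favour of default_storage_engine. So if the URL route is pursued at all, it would have to look more like the line below (a sketch, not verified against this Jira version):

databaseUrl=jdbc:mysql://localhost/jira?autoReconnect=true&characterEncoding=utf8&useUnicode=true&sessionVariables=default_storage_engine=InnoDB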

Does anyone have an idea how to get it to work? I don't want to create another SQL setup just to run MySQL 5.6.x, to be honest. I hope someone knows the solution to this.

Nginx conflicting server name for subdomain

Posted: 26 Sep 2021 04:17 PM PDT

I currently have a vhost running on Nginx for foo.domain.com and everything works great.

I created a new file for a new sub-domain I want to add called bar.domain.com. I use the same settings for both.

When I restart Nginx I get

Restarting nginx: nginx: [warn] conflicting server name "" on 0.0.0.0:443, ignored
nginx.

When I go to bar.domain.com I see what I am supposed to see, but when I go to foo.domain.com I see the page that bar.domain.com links to.

Foo

upstream php-handler {
    server unix:/var/run/php5-fpm.sock;
}

server {
        listen 80;
        server_name foo.domain.com;
        return 301 https://$server_name$request_uri;
}

server {
        listen 443;

        ssl on;
        ssl_certificate      [path_foo]/cacert.pem;
        ssl_certificate_key  [path_foo]/privkey.pem;

        root [path]/foo;

        ...
}

Bar

server {
        listen 80;
        server_name bar.domain.com;
        return 301 https://$server_name$request_uri;
}

server {
        listen 443;

        ssl on;
        ssl_certificate      [path_bar]/cacert.pem;
        ssl_certificate_key  [path_bar]/privkey.pem;

        root [path]/bar;
}

Where am I going wrong?
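For reference, the warning itself comes from having two listen 443 blocks with no server_name, so nginx cannot tell them apart and serves whichever it picked as the default. A hedged sketch of the missing piece in each HTTPS block:

server {
        listen 443;
        server_name foo.domain.com;   # and bar.domain.com in the other vhost file

        ssl on;
        ssl_certificate      [path_foo]/cacert.pem;
        ssl_certificate_key  [path_foo]/privkey.pem;
        ...
}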

How can I move packages between Spacewalk channels?

Posted: 26 Sep 2021 08:08 PM PDT

How can I move a set of packages from one channel to another? I haven't found any option in the interface that'd let me do this.

My current workflow is to download the rpms, delete them from the channel (which appears to delete them from all of Spacewalk!), and then re-upload them with rhnpush. For obvious reasons this is clunky as hell. I must be missing something.

Which also makes me ask: How does one remove a package from a channel and not delete it from the entire System?
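For what it's worth, the spacecmd CLI has channel-level package operations that avoid the download/delete/re-push loop; a sketch (channel labels and package names are placeholders, and the behaviour is worth verifying on a test channel first):

# copy packages into the target channel, then take them out of the source channel
spacecmd softwarechannel_addpackages target-channel 'somepackage-1.2-3.el7.x86_64'
spacecmd softwarechannel_removepackages source-channel 'somepackage-1.2-3.el7.x86_64'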

Postgres Restore Not Restoring Binary Data

Posted: 26 Sep 2021 04:03 PM PDT

This is a very urgent situation.

We have a postgres 9.4 database installed on a CentOs machine.

We are trying to take a backup from a Heroku Postgres database and restore it to the database on CentOs machine.

The commands we are using:

pg_dump -Fc -h ec2-99-99-99-99.compute-1.amazonaws.com -p 5762 -U xyz -d dbname > file.dump

The dump created by pg_dump seems to be missing data from columns that have the type of binary (these columns contain images in binary format).

When we restore this backup to the database on CentOS using following command, all the images from the image i.e. binary type columns are missing:

pg_restore -d onlinedb ~/file.dump

We have to go live in the morning and are completely bummed by this unexpected issue.

Output of \d attachments (attachments is the problem table) command:

             Table "public.attachments"
      Column      |          Type          | Modifiers
------------------+------------------------+-----------
 id               | integer                | not null
 configuration_id | integer                |
 style            | character varying(255) |
 file_contents    | bytea                  |

More information: The heroku postgres database from which we are creating backup is PostgreSQL 9.2.6. The one we are restoring to is PostgreSQL 9.4.
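A quick, hedged way to tell whether the bytea data is missing from the dump itself or being lost on restore is to compare aggregate sizes on both ends:

-- run against both the Heroku source and the restored CentOS database
SELECT count(*) AS total_rows,
       count(file_contents) AS rows_with_data,
       pg_size_pretty(sum(octet_length(file_contents))) AS total_bytea
FROM attachments;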

Apache/PHP ldap stops working. Requires restart of apache

Posted: 26 Sep 2021 06:07 PM PDT

I currently have a setup where users log in to a website using LDAP credentials. It's all internal so I don't really care a ton about certificates. So, in my /etc/openldap/ldap.conf file I have TLS_REQCERT never. Before adding that to the file I was always getting the error Error Binding to LDAP: TLS: hostname does not match CN in peer certificate. After adding that everything seemed to work fine. However now I'm finding that after some time, maybe a few hours to a day, the logins will fail again, and I'll start getting that error. If I restart apache everything works fine again for a while. Then the error pops up again. What could be causing this to keep happening? The server is a CentOS 6.5.
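One hedged avenue to rule out: the OpenLDAP client library also honours the LDAPTLS_REQCERT environment variable, and setting it from the PHP code removes the dependence on when the long-lived Apache workers last read /etc/openldap/ldap.conf. A sketch:

<?php
// set before ldap_connect()/ldap_start_tls() in the login code
putenv('LDAPTLS_REQCERT=never');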

Adapting the simple Open VPN example to get multiple client

Posted: 26 Sep 2021 07:08 PM PDT

The sample OpenVPN configuration in the Debian documentation (https://wiki.debian.org/OpenVPN) gives the following code.

Server /etc/openvpn/tun0.conf:

dev tun0
ifconfig 10.9.8.1 10.9.8.2
secret /etc/openvpn/static.key

Client /etc/openvpn/tun0.conf:

remote your-server.org
dev tun0
ifconfig 10.9.8.2 10.9.8.1
secret /etc/openvpn/static.key

My question: how to adapt this to handle more than one client? Without hard coding IPs on the clients?
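The static-key setup above is point-to-point and only ever supports a single client, so handling several clients without hard-coding IPs usually means switching to TLS mode, where the server directive hands out addresses from a pool. A hedged sketch of the server side (certificate paths are placeholders; clients then just need remote, dev tun, client and their own cert/key):

# /etc/openvpn/server.conf - multi-client TLS mode
dev tun
server 10.9.8.0 255.255.255.0      # hands each client an address from this pool
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh.pem
keepalive 10 120
persist-key
persist-tun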

truncated headers from varnish configuration file

Posted: 26 Sep 2021 06:07 PM PDT

I need help with Varnish. I have a Varnish configuration file, default.vcl.

I can see from the output of varnishstat that the hit ratio is quite high.

I've also checked varnishtop -i txurl to see which requests are going to the backend.

Now, the problem is that the X-Cache header, along with the other Varnish headers, is missing from the HTTP response.

In default.vcl there is an option to delete those headers, but I need help with how to keep those headers in the HTTP response from Varnish itself.

My default.vcl file

backend default {   .host = "127.0.0.1";   .port = "81"; }    # admin backend with longer timeout values. Set this to the same IP & port as your default server.  backend admin {    .host = "127.0.0.1";    .port = "81";    .first_byte_timeout = 18000s;    .between_bytes_timeout = 18000s;  }  # add your Magento server IP to allow purges from the backend  acl purge {    "localhost";    "127.0.0.1";  }      sub vcl_recv {      if (req.restarts == 0) {          if (req.http.x-forwarded-for) {              set req.http.X-Forwarded-For =              req.http.X-Forwarded-For + ", " + client.ip;          } else {              set req.http.X-Forwarded-For = client.ip;          }      }        if (req.request != "GET" &&        req.request != "HEAD" &&        req.request != "PUT" &&        req.request != "POST" &&        req.request != "TRACE" &&        req.request != "OPTIONS" &&        req.request != "DELETE" &&        req.request != "PURGE") {          /* Non-RFC2616 or CONNECT which is weird. */          return (pipe);      }        # purge request      if (req.request == "PURGE") {          if (!client.ip ~ purge) {              error 405 "Not allowed.";          }          ban("obj.http.X-Purge-Host ~ " + req.http.X-Purge-Host + " && obj.http.X-Purge-URL ~ " + req.http.X-Purge-Regex + " && obj.http.Content-Type ~ " + req.http.X-Purge-Content-Type);          error 200 "Purged.";      }        # switch to admin backend configuration      if (req.http.cookie ~ "adminhtml=") {          set req.backend = admin;      }        # we only deal with GET and HEAD by default      if (req.request != "GET" && req.request != "HEAD") {          return (pass);      }        # normalize url in case of leading HTTP scheme and domain      set req.url = regsub(req.url, "^http[s]?://[^/]+", "");        # static files are always cacheable. remove SSL flag and cookie      if (req.url ~ "^/(media|js|skin)/.*\.(png|jpg|jpeg|gif|css|js|swf|ico)$") {          unset req.http.Https;          unset req.http.Cookie;      }        # not cacheable by default      if (req.http.Authorization || req.http.Https) {          return (pass);      }        # do not cache any page from      # - index files      # - ...  
#    if (req.url ~ "^/(index)") {  #        return (pass);  #    }        # Don't cache checkout/customer pages, product compare      if (req.url ~ "^/(index.php/)?(checkout|customer|catalog/cart/product_compare|wishlist)") {          return(pass);      }          # as soon as we have a NO_CACHE cookie pass request      if (req.http.cookie ~ "NO_CACHE=") {          return (pass);      }        # normalize Aceept-Encoding header      # http://varnish.projects.linpro.no/wiki/FAQ/Compression      if (req.http.Accept-Encoding) {          if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {              # No point in compressing these              remove req.http.Accept-Encoding;          } elsif (req.http.Accept-Encoding ~ "gzip") {              set req.http.Accept-Encoding = "gzip";          } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {              set req.http.Accept-Encoding = "deflate";          } else {              # unkown algorithm              remove req.http.Accept-Encoding;          }      }        # remove Google gclid parameters      set req.url = regsuball(req.url,"\?gclid=[^&]+$",""); # strips when QS = "?gclid=AAA"      set req.url = regsuball(req.url,"\?gclid=[^&]+&","?"); # strips when QS = "?gclid=AAA&foo=bar"      set req.url = regsuball(req.url,"&gclid=[^&]+",""); # strips when QS = "?foo=bar&gclid=AAA" or QS = "?foo=bar&gclid=AAA&bar=baz"        return (lookup);  }    # sub vcl_pipe {  #     # Note that only the first request to the backend will have  #     # X-Forwarded-For set.  If you use X-Forwarded-For and want to  #     # have it set for all requests, make sure to have:  #     # set bereq.http.connection = "close";  #     # here.  It is not set by default as it might break some broken web  #     # applications, like IIS with NTLM authentication.  
#     return (pipe);  # }  #  # sub vcl_pass {  #     return (pass);  # }  #  sub vcl_hash {      hash_data(req.url);      if (req.http.host) {          hash_data(req.http.host);      } else {          hash_data(server.ip);      }      if (!(req.url ~ "^/(media|js|skin)/.*\.(png|jpg|jpeg|gif|css|js|swf|ico)$")) {          call design_exception;      }      return (hash);  }    sub vcl_hit {       return (deliver);  }      sub vcl_miss {       return (fetch);  }    sub vcl_fetch {      if (beresp.status == 500) {         set beresp.saintmode = 10s;         return (restart);      }      set beresp.grace = 5m;        # add ban-lurker tags to object      set beresp.http.X-Purge-URL = req.url;      set beresp.http.X-Purge-Host = req.http.host;        if (beresp.status == 200 || beresp.status == 301 || beresp.status == 404) {          if (beresp.http.Content-Type ~ "text/html" || beresp.http.Content-Type ~ "text/xml") {              if ((beresp.http.Set-Cookie ~ "NO_CACHE=") || (beresp.ttl < 1s)) {                  set beresp.ttl = 0s;                  return (hit_for_pass);              }                # marker for vcl_deliver to reset Age:              set beresp.http.magicmarker = "1";                # Don't cache cookies              unset beresp.http.set-cookie;          } else {              # set default TTL value for static content              set beresp.ttl = 4h;          }          return (deliver);      }        return (hit_for_pass);  }    sub vcl_deliver {      # debug info      if (resp.http.X-Cache-Debug) {          if (obj.hits > 0) {              set resp.http.X-Cache = "HIT";              set resp.http.X-Cache-Hits = obj.hits;          } else {             set resp.http.X-Cache = "MISS";          }          set resp.http.X-Cache-Expires = resp.http.Expires;      } else {          # remove Varnish/proxy header          remove resp.http.X-Varnish;          remove resp.http.Via;          remove resp.http.Age;          remove resp.http.X-Purge-URL;          remove resp.http.X-Purge-Host;      }        if (resp.http.magicmarker) {          # Remove the magic marker          unset resp.http.magicmarker;            set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";          set resp.http.Pragma = "no-cache";          set resp.http.Expires = "Mon, 31 Mar 2008 10:00:00 GMT";          set resp.http.Age = "0";      }  }    # sub vcl_error {  #     set obj.http.Content-Type = "text/html; charset=utf-8";  #     set obj.http.Retry-After = "5";  #     synthetic {"  # <?xml version="1.0" encoding="utf-8"?>  # <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"  #  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">  # <html>  #   <head>  #     <title>"} + obj.status + " " + obj.response + {"</title>  #   </head>  #   <body>  #     <h1>Error "} + obj.status + " " + obj.response + {"</h1>  #     <p>"} + obj.response + {"</p>  #     <h3>Guru Meditation:</h3>  #     <p>XID: "} + req.xid + {"</p>  #     <hr>  #     <p>Varnish cache server</p>  #   </body>  # </html>  # "};  #     return (deliver);  # }  #  # sub vcl_init {  #   return (ok);  # }  #  # sub vcl_fini {  #   return (ok);  # }    sub design_exception {  }  
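Going by the vcl_deliver above, X-Cache is only ever set when the request carries an X-Cache-Debug header; otherwise the branch that strips Via/X-Varnish runs instead. A hedged sketch of setting it unconditionally, in the same VCL 3-era syntax as the file above:

sub vcl_deliver {
    # always expose hit/miss, independent of X-Cache-Debug
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    # keep the rest of the existing vcl_deliver logic here
}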

Why *do* windows print queues occasionally choke on a print job

Posted: 26 Sep 2021 05:05 PM PDT

You know the way Windows print queues will occasionally stop working, with a print job at the head of the queue which just won't print and which you can't delete? Anyone know what's going on when this happens?

I've been seeing this since the NT4 days and it still happens on 2008. I'm talking about standard IP connected laser printers - nothing fancy. I support a lot of servers and loads of workstations and see this happen a few times a year.

The user will call saying they can't print. When you examine the print queue, which in my case will generally be a server based queue shared out to the workstations, you find a print job which you cannot cancel. You also can't pause it, reinitialize it, nothing.

Stopping the spooler is the usual trick and works sometimes. However, I occasionally see cases which even this doesn't cure and for which a reboot is the only solution. Pause the queue, reboot, and when it comes back up the job can then be deleted. Once gone, the printer happily goes back to its normal state. No action is ever necessary on the printer.
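For the cases where a spooler restart alone doesn't release the job, the orphaned .SPL/.SHD files in the spool directory are often the culprit; clearing them between the stop and the start is the standard, if blunt, extra step. A sketch:

net stop spooler
del /q %SystemRoot%\System32\spool\PRINTERS\*.*
net start spooler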

I regard having to reboot as last resort and don't like it. What on earth can be going on when stopping the process (spooler) and restarting it doesn't clear a problem?

It's not linked to any manufacturer either. I've seen this on HPs, Lexmarks, Canons, Ricohs, on lasers, on plotters... can't say I ever saw this on dot matrix.

Anyone got any ideas as to what may be going on?

Ian

Debugging rules in Iptables [duplicate]

Posted: 26 Sep 2021 05:05 PM PDT

How can I know how many packets were dropped by my iptables rules?

Is there any debugging mechanism to see which rule is dropping my packet or accepting it?
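Two built-in options, as a sketch: per-rule packet counters, and a LOG rule placed just before the DROP so dropped packets show up in the kernel log (rate-limited to avoid flooding):

# show per-rule packet/byte counters
iptables -L -n -v --line-numbers

# log what is about to be dropped, then drop it
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables drop: " --log-level 4
iptables -A INPUT -j DROP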
