Friday, January 14, 2022

Recent Questions - Server Fault

Require authentication from subset of mynetworks in postfix

Posted: 14 Jan 2022 04:54 AM PST

Is it possible to configure Postfix's SMTP service to require authentication (e.g. with smtpd_sasl_auth_enable = yes) for certain IP ranges, but leave other ranges unauthenticated?

For our local network we want hosts to be able to relay through the smtp servers without authentication, including sending externally. That network range is listed in mynetworks.

But for other ranges, which are not on our network but are currently listed in mynetworks, it would be better to require SMTP authentication.

Is this possible?
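One way this is commonly handled (a sketch with documentation example networks standing in for the real ranges) is to keep only the truly trusted LAN in mynetworks and let every other range fall through to SASL:

```
# /etc/postfix/main.cf (sketch; 192.0.2.0/24 is a placeholder LAN)
smtpd_sasl_auth_enable = yes
mynetworks = 127.0.0.0/8 192.0.2.0/24

smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```

With this, hosts in mynetworks relay without credentials, while the ranges you remove from mynetworks must authenticate like any other client; in other words, the usual answer is to take those ranges out of mynetworks rather than to special-case them inside it.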

How to force users to change their Windows Hello PIN

Posted: 14 Jan 2022 04:53 AM PST

We changed our password policy in the Microsoft Endpoint Manager and now require a longer PIN.

The issue is that in testing we noticed you're only asked to change the Windows Hello PIN when logging in with it. Since many of our users use biometric logins, they are never prompted to change it.

I'm looking for a solution where the user is asked to change the PIN regardless of the sign-in method, just like when the PIN expires.

WHMCS Bridge WordPress plugin: order loop

Posted: 14 Jan 2022 03:47 AM PST

I'm posting here because for several days I've been trying to use the WHMCS Bridge plugin for WordPress and I'm encountering an issue.

I think it may be a misconfiguration of WHMCS or WordPress.

This is my configuration:

WordPress: 5.8.3
WHMCS Bridge: Free version 6.3
WHMCS: 8.3.1

I configured the WHMCS Bridge as follows:

WordPress:

WHMCS URL: https://whmcs.mydomain.com
Scope WHMCS CSS: checked
jQuery library: WHMCS
Load WHMCS style: checked
Load WHMCS invoice style: checked
Footer: Site

WHMCS (server) :

Company Name: My sweet company
Email Address: billing@mydomain.com
Domain: https://whmcs.mydomain.com
WHMCS System URL: https://whmcs.mydomain.com/
System Theme: six
Maintenance Mode: Disabled
Friendly URLs: Basic URLs

This is my problem:

When I visit my integrated store at https://www.mywordpress.com/whmcs-bridge/ and select a product, I'm redirected to /whmcs-bridge/?ccce=cart, and the order summary block is empty. When I then click the 'Continue / order' button, I'm redirected to https://www.mywordpress.com/whmcs-bridge/?ccce=cart&a=confproduct&i=2&systpl=six, and the page renders incorrectly.

I have no idea how to fix it. Could you help me?

How do I increase Physical Memory Usage limit in cPanel?

Posted: 14 Jan 2022 03:43 AM PST

In cPanel, I noticed some error messages on the sidebar, including one saying that Physical Memory Usage has reached its limit. I just upgraded my CentOS 6 server to CloudLinux this week and hadn't noticed any errors like that before. My server has 32 GB of RAM, so I suspect there is a setting somewhere that restricts my user account to only 1 GB of RAM, but I don't know where to find it.

I've changed memory_limit to 2G in my PHP settings in WHM, in case that had something to do with it, but the problem persists.

I suspect that somewhere in WHM, there is a way to increase the limits for each of the 3 errors I'm experiencing, but I need help finding them.


AWS Lightsail LAMP PHP7 mod-rewrite

Posted: 14 Jan 2022 03:03 AM PST

I just installed a fresh AWS Lightsail LAMP stack with PHP 7.4 and a Let's Encrypt SSL certificate. The docs say mod_rewrite is installed/enabled by default, yet I get a 404 when I go to any extensionless URL on my site such as domain.com/contact, while domain.com/contact.php of course works.

I looked at my bitnami-ssl.conf file and see a line with AllowOverride All, which I believe should allow mod_rewrite to work.

I am not sure what else I need. Any suggestions on how to troubleshoot? Thank you.
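For what it's worth, mod_rewrite being enabled is not enough by itself: a rewrite rule still has to map /contact onto contact.php. A minimal .htaccess sketch, assuming AllowOverride All applies to the document root as shown:

```
# .htaccess in the site's document root
RewriteEngine On
# If the request is not an existing file or directory,
# and a .php file with the same name exists, serve that file.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [L]
```

Alternatively, `Options +MultiViews` can let Apache content-negotiate /contact to contact.php without any rewrite rules.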

Linux Vagrant machine stops working after some time on Windows

Posted: 14 Jan 2022 03:02 AM PST

I have a server with Windows Server 2012 R2 and a second server with Windows Server 2019 Essentials. I have installed VirtualBox 6.1 and run a web application inside a Vagrant machine. On both servers, the application runs just fine. However, I am facing an issue with various vagrant commands (halt, ssh, reload). When I restart the whole Windows machine and run vagrant up, everything works fine: the web app is running and I am able to use vagrant ssh. But after some time (about 10 days), when I try to run any vagrant command it shows me an error:

There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["showvminfo", "2506e300-6742-49c3-8e39-f645ef1d3563"]

But the Vagrant machine is still running fine inside, because I am able to access the web page.

I also tried to run the showvminfo command, but nothing special shows up there. I also tried to open VirtualBox and check the machines there:

  • on the first server, the machine is not even listed inside VirtualBox GUI
  • on the second server, the machine is listed in the GUI but shows an error: Callee RC: REGDB_E_READREGDB (0x80040150)

It is frustrating because the machine is running fine, but I am not able to SSH into it, and I am forced to restart the Windows machine to resolve the issue.

Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu.box"

  # install docker
  config.vm.provision :docker

  config.vm.provider "virtualbox" do |v|
    v.memory = 16384
    v.customize ["modifyvm", :id, "--ioapic", "on"]
    v.cpus = 4
    v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end

  config.vm.network "forwarded_port", guest: 443, host: 443
  config.vm.network "forwarded_port", guest: 80, host: 80
  config.vm.network "private_network", ip: "10.10.10.10"
end

How can I resolve the issue or trace the problem?
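One detail worth checking before a full reboot (a sketch; the path assumes a default VirtualBox install): VBoxSVC, the per-user COM service that both VBoxManage and the GUI talk to, can die or end up owned by a different user session than the one that started the VM, which matches the "VM runs but is not listed" symptom:

```
REM Can VBoxManage still see the VM at all?
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" list runningvms

REM Which user/session owns VBoxSVC? If vagrant runs as a different
REM user (e.g. a scheduled task vs. an interactive login), the VM is
REM invisible to it even though the VM process itself is alive.
tasklist /FI "IMAGENAME eq VBoxSVC.exe" /V
```

If the user/session differs between runs, making sure vagrant is always invoked as the same user that brought the VM up is a good first experiment.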

Connecting to Dropbear SSH using keyfile not possible (Permission denied (publickey))

Posted: 14 Jan 2022 02:53 AM PST

I'm running my home server (Ubuntu 20.04 LTS) with an encrypted root and am trying to use dropbear in the initramfs so I can unlock it remotely during boot. To set up remote unlocking, I basically followed this guide: How to install LUKS encrypted Ubuntu 18.04.x Server and enable remote unlocking

On my MacBook I've successfully created a pair of ssh keys:

(base) myuser@myMBP ~ % ssh-keygen -t rsa -b 4096  

After that I added the newly generated public key to /etc/dropbear-initramfs/authorized_keys:

myuser@myserver:~$ cat /etc/dropbear-initramfs/authorized_keys
no-port-forwarding,no-agent-forwarding,no-x11-forwarding,command=/bin/cryptroot-unlock ssh-rsa <here-goes-my-ssh-pubkey> myuser@myMBP.local

After that, I ran myuser@myserver:~$ sudo update-initramfs -c -k all, which completed without errors.

However, trying to login to the Dropbear SSH server doesn't work:

(base) myuser@myMBP ~ % ssh -i ~/.ssh/id_rsa -o "HostKeyAlgorithms ssh-rsa" -p 9999 root@192.168.xxx.xxx -vvv  OpenSSH_8.6p1, LibreSSL 2.8.3  debug1: Reading configuration data /etc/ssh/ssh_config  debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files  debug1: /etc/ssh/ssh_config line 54: Applying options for *  debug2: resolve_canonicalize: hostname 192.168.xxx.xxx is address  debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/Users/myuser/.ssh/known_hosts'  debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/Users/myuser/.ssh/known_hosts2'  debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling  debug3: ssh_connect_direct: entering  debug1: Connecting to 192.168.xxx.xxx [192.168.xxx.xxx] port 9999.  debug3: set_sock_tos: set socket 3 IP_TOS 0x48  debug1: Connection established.  debug1: identity file /Users/myuser/.ssh/id_rsa type 0  debug1: identity file /Users/myuser/.ssh/id_rsa-cert type -1  debug1: Local version string SSH-2.0-OpenSSH_8.6  debug1: Remote protocol version 2.0, remote software version dropbear_2019.78  debug1: compat_banner: no match: dropbear_2019.78  debug2: fd 3 setting O_NONBLOCK  debug1: Authenticating to 192.168.xxx.xxx:9999 as 'root'  debug3: send packet: type 20  debug1: SSH2_MSG_KEXINIT sent  debug3: receive packet: type 20  debug1: SSH2_MSG_KEXINIT received  debug2: local client KEXINIT proposal  debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c  debug2: host key algorithms: ssh-rsa  debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com  debug2: 
MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1  debug2: compression ctos: none,zlib@openssh.com,zlib  debug2: compression stoc: none,zlib@openssh.com,zlib  debug2: languages ctos:   debug2: languages stoc:   debug2: first_kex_follows 0   debug2: reserved 0   debug2: peer server KEXINIT proposal  debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,kexguess2@matt.ucc.asn.au  debug2: host key algorithms: ecdsa-sha2-nistp256,ssh-rsa,ssh-dss  debug2: ciphers ctos: aes128-ctr,aes256-ctr,aes128-cbc,aes256-cbc,3des-ctr,3des-cbc  debug2: ciphers stoc: aes128-ctr,aes256-ctr,aes128-cbc,aes256-cbc,3des-ctr,3des-cbc  debug2: MACs ctos: hmac-sha1-96,hmac-sha1,hmac-sha2-256  debug2: MACs stoc: hmac-sha1-96,hmac-sha1,hmac-sha2-256  debug2: compression ctos: zlib@openssh.com,none  debug2: compression stoc: zlib@openssh.com,none  debug2: languages ctos:   debug2: languages stoc:   debug2: first_kex_follows 0   debug2: reserved 0   debug1: kex: algorithm: curve25519-sha256  debug1: kex: host key algorithm: ssh-rsa  debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha2-256 compression: none  debug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha2-256 compression: none  debug3: send packet: type 30  debug1: expecting SSH2_MSG_KEX_ECDH_REPLY  debug3: receive packet: type 31  debug1: SSH2_MSG_KEX_ECDH_REPLY received  debug1: Server host key: ssh-rsa SHA256:vdFXJflh1ltg2QQ6A8S5qnjtPBtKR3h6l548DAh6Hwk  debug3: put_host_port: 
[192.168.xxx.xxx]:9999  debug3: put_host_port: [192.168.xxx.xxx]:9999  debug3: record_hostkey: found key type RSA in file /Users/myuser/.ssh/known_hosts:4  debug3: load_hostkeys_file: loaded 1 keys from [192.168.xxx.xxx]:9999  debug1: load_hostkeys: fopen /Users/myuser/.ssh/known_hosts2: No such file or directory  debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory  debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory  debug1: Host '[192.168.xxx.xxx]:9999' is known and matches the RSA host key.  debug1: Found key in /Users/myuser/.ssh/known_hosts:4  debug3: send packet: type 21  debug2: set_newkeys: mode 1  debug1: rekey out after 4294967296 blocks  debug1: SSH2_MSG_NEWKEYS sent  debug1: expecting SSH2_MSG_NEWKEYS  debug3: receive packet: type 21  debug1: SSH2_MSG_NEWKEYS received  debug2: set_newkeys: mode 0  debug1: rekey in after 4294967296 blocks  debug1: Will attempt key: /Users/myuser/.ssh/id_rsa RSA SHA256:XizaS2UPC7m5C37NwgUVI8uPvLBzLINvRSBnpKGkzPE explicit  debug2: pubkey_prepare: done  debug3: send packet: type 5  debug3: receive packet: type 6  debug2: service_accept: ssh-userauth  debug1: SSH2_MSG_SERVICE_ACCEPT received  debug3: send packet: type 50  debug3: receive packet: type 51  debug1: Authentications that can continue: publickey  debug3: start over, passed a different list publickey  debug3: preferred publickey,keyboard-interactive,password  debug3: authmethod_lookup publickey  debug3: remaining preferred: keyboard-interactive,password  debug3: authmethod_is_enabled publickey  debug1: Next authentication method: publickey  debug1: Offering public key: /Users/myuser/.ssh/id_rsa RSA SHA256:XizaS2UPC7m5C37NwgUVI8uPvLBzLINvRSBnpKGkzPE explicit  debug3: send packet: type 50  debug2: we sent a publickey packet, wait for reply  debug3: receive packet: type 51  debug1: Authentications that can continue: publickey  debug2: we did not send a packet, disable method  debug1: No more authentication 
methods to try.  root@192.168.xxx.xxx: Permission denied (publickey).  

While trying to solve the issue, I stumbled across the following Ubuntu bug: Dropbear initramfs hook creates authorized_keys file in an invalid folder, but applying the suggested workaround did not help in my case...

Do you have any clue what could be going wrong?
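Two things worth trying (a sketch; the initrd path assumes Ubuntu's standard layout). A likely culprit given the versions in the log: OpenSSH 8.5+ no longer offers SHA-1 ssh-rsa signatures for public-key authentication by default, and dropbear 2019.78 cannot verify the newer rsa-sha2 signatures, so the server rejects the key even though it is in authorized_keys:

```
# 1) Explicitly re-enable ssh-rsa signatures for this connection
ssh -i ~/.ssh/id_rsa \
    -o HostKeyAlgorithms=+ssh-rsa \
    -o PubkeyAcceptedKeyTypes=+ssh-rsa \
    -p 9999 root@192.168.xxx.xxx

# 2) Verify the key actually made it into the generated initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i -E 'dropbear|authorized'
```

If (1) works, a matching Host block in ~/.ssh/config makes the options permanent for that host/port.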

Thanks in advance, sandman

Postfix Send / Receive Email on Primary Domain [closed]

Posted: 14 Jan 2022 03:41 AM PST

I'm managing my Ubuntu server using VirtualMin. VirtualMin is configured to use Postfix for email. My DNS configuration is as follows:

DNS Configuration

Sending / receiving emails on admin@server1.somewhere.xyz works.

Sending / receiving emails on admin@somewhere.xyz does not work.

Is there DNS or Postfix configuration required to get emails to work from admin@somewhere.xyz?
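For mail to admin@somewhere.xyz to be deliverable, the bare domain needs its own MX record pointing at the mail host, and Postfix must accept somewhere.xyz as a local/virtual domain. A minimal zone sketch (the hostname, TTL, and IP are assumptions for illustration):

```
; zone file for somewhere.xyz (sketch)
somewhere.xyz.          3600  IN  MX  10 server1.somewhere.xyz.
server1.somewhere.xyz.  3600  IN  A   203.0.113.10
```

On the Postfix side, somewhere.xyz would also need to appear in mydestination (or be added as a virtual domain in Virtualmin) so the server agrees to receive mail addressed to it.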

dnf update with nobest option

Posted: 14 Jan 2022 02:25 AM PST

I got the following error during an update:

Error: Problem: cannot install the best update candidate for package glibc-gconv-extra-2.28-167.el8.x86_64

  • nothing provides glibc-common = 2.28-181.el8 needed by glibc-gconv-extra-2.28-181.el8.x86_64

  • nothing provides glibc(x86-64) = 2.28-181.el8 needed by glibc-gconv-extra-2.28-181.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

It's caused by our staging system (Katello), which does not yet provide that package for this stage. The "--nobest" option would let me update the system.

How can I estimate, in such cases, what effects this way of updating will have on my systems, and how can I decide whether to provide the missing package or to simply use "--nobest" every time?
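One way to see up front what --nobest would actually do (a sketch; flags as in RHEL 8's dnf) is to resolve the transaction without applying it, then review which packages would be held back:

```
# Resolve the transaction but answer "no" at the confirmation prompt,
# so nothing is installed; the summary lists packages that would be
# kept at an older ("not best") version
dnf update --nobest --assumeno
```

Packages held back this way stay upgradable later, so repeating the dry run after the staging system catches up shows when the hold-back disappears.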

Rsyslog not receiving logs from network devices

Posted: 14 Jan 2022 03:57 AM PST

I recently provisioned a new rsyslog server. Everything looks good from the network/Linux/security side, but I'm still not receiving logs from network devices. Can you please suggest what the issue could be?

Below is the rsyslog.conf file:

#### MODULES ####
module(load="imuxsock" # provides support for local system logging (e.g. via logger command)
       SysSock.Use="off") # Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
module(load="imjournal" # provides access to the systemd journal
       StateFile="imjournal.state") # File to store the position in the journal
#module(load="imklog") # reads kernel messages (the same are read from journald)
#module(load="immark") # provides --MARK-- message capability

# Provides UDP syslog reception
# for parameters see http://www.rsyslog.com/doc/imudp.html
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")
input(type="imudp" port="8514")

# Provides TCP syslog reception
# for parameters see http://www.rsyslog.com/doc/imtcp.html
module(load="imtcp") # needs to be done just once
#input(type="imtcp" port="514")
input(type="imtcp" port="24514")

#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
global(workDirectory="/var/lib/rsyslog")

# Use default timestamp format
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")

# Include all config files in /etc/rsyslog.d/
include(file="/etc/rsyslog.d/*.conf" mode="optional")

#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages

# The authpriv file has restricted access.
authpriv.* /var/log/secure

# Log all the mail messages in one place.
mail.* -/var/log/maillog

# Log cron stuff
cron.* /var/log/cron

# Everybody gets emergency messages
*.emerg :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler

# Save boot messages also to boot.log
local7.* /var/log/boot.log

# ### sample forwarding rule ###
#action(type="omfwd"
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#queue.filename="fwdRule1" # unique name prefix for spool files
#queue.maxdiskspace="1g" # 1gb space limit (use as much as possible)
#queue.saveonshutdown="on" # save messages to disk on shutdown
#queue.type="LinkedList" # run asynchronously
#action.resumeRetryCount="-1" # infinite retries if host is down
# Remote Logging (we use TCP for reliable delivery)
# remote_host is: name/ip, e.g. 192.168.0.1, port optional e.g. 10514
#Target="remote_host" Port="XXX" Protocol="tcp")

#destination d_logzilla
*.* @xx.xx.xx.xx:514
#destination d_new_logzilla
*.* @xx.xx.xx.xx:514
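A few checks that usually narrow this down (a sketch; the interface name and ports are assumptions based on the config above): confirm rsyslog is actually listening, confirm the device packets reach the host at all, and confirm nothing drops them in between:

```
# Is rsyslog bound to the configured ports?
ss -lnup | grep -E ':514|:8514'
ss -lntp | grep ':24514'

# Do the devices' packets arrive on the wire? (replace eth0)
tcpdump -ni eth0 udp port 514

# Is a local firewall dropping them?
iptables -nvL | grep 514
```

If the packets arrive but nothing is logged, SELinux is a common culprit on RHEL-family systems: non-default syslog ports must be labeled, e.g. `semanage port -a -t syslogd_port_t -p tcp 24514`.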

Connect to home network and corporate network at the same time? [migrated]

Posted: 14 Jan 2022 02:16 AM PST

When I connect to my corporate network with Endpoint VPN, I can no longer see my local network resources (RDP to them, or connect to local shares). I am on a Windows laptop. Is there a way to have my laptop's Ethernet connect to Endpoint VPN and then have my laptop's WiFi connect to my home local network? Alternatively, if the networking can't be split in that way, could I plug a secondary USB WiFi or Ethernet adapter into the laptop so that I can get traffic to/from my local network while simultaneously being connected to my corporate network?

I'm not particularly savvy with the more complex aspects of setting up routing tables and things like that, so is there an easy way to achieve something like the above, maybe an app that can say "dedicate this ethernet-or-WiFi to corporate Endpoint VPN, but keep this ethernet-or-WiFi for local network traffic"?

HPE SSD Slow performance

Posted: 14 Jan 2022 02:11 AM PST

We are facing an extremely slow server. We are using SSDs in RAID 6 but cannot seem to find the cause.

After doing some research, it appears to be something related to the RAID and the SSDs.

Server is HPE Gen 9 DL 360

Memory is 64 GB

RAID 6 with HPE SSD disks

HPE VMware 7.x

When a document is opened it takes ages, or sometimes the server responds a couple of seconds later.

This is my first post, so apologies if I missed something.

Kubernetes enable feature gate

Posted: 14 Jan 2022 02:11 AM PST

I am trying to set up Kubernetes on Debian Bullseye. I have a few pods running already; however, to get further I need to set up my own private registry. It runs fine insecurely, but as soon as I add the env variables for TLS it crashes with an error.

I've been trying to turn on Ephemeral Containers so I can debug the container however I can't get it to work and I can't find any documentation to do it without involving other technology such as minikube which I don't want to use.

I've tried adding - --feature-gates=EphemeralContainers=true to the command section of kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml in /etc/kubernetes/manifests/, then rebooting.

I tried changing ExecStart in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to ExecStart=/usr/bin/kubelet --feature-gates=EphemeralContainers=true $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS, then rebooting.

I also tried adding --feature-gates='EphemeralContainers=true' to KUBELET_KUBEADM_ARGS= in /var/lib/kubelet/kubeadm-flags.env

Any help would be very much appreciated; if you need any more information, just let me know.
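For a kubeadm-managed cluster, feature gates can also be expressed declaratively instead of hand-editing the static pod manifests and drop-in files. A sketch (the file name is an assumption; applied via `kubeadm init --config` or `kubeadm upgrade`, assuming kubeadm's v1beta3 config API):

```
# kubeadm-config.yaml (sketch)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "EphemeralContainers=true"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  EphemeralContainers: true
```

Note that for the kubelet, the gate belongs in its config file (/var/lib/kubelet/config.yaml, under featureGates:) rather than on the ExecStart line, since the kubeadm drop-in passes --config and file settings take effect on kubelet restart.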

When I execute a script in crontab the output is in a different language

Posted: 14 Jan 2022 02:54 AM PST

I am executing a script in a crontab that writes a log. When I run the script as root, the log is written in Spanish.

But when I run it in the crontab, the output is in english.

Is there a way to run the crontab with the same locale configuration I have when I run the script as root?
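Cron runs jobs with a minimal environment, so locale variables like LANG and LC_ALL that are set in root's shell profile are simply absent. They can be set at the top of the crontab itself (a sketch; the Spanish locale name is an assumption and must exist on the system, see `locale -a`, and the script path is a placeholder):

```
# crontab -e (sketch)
LANG=es_ES.UTF-8
LC_ALL=es_ES.UTF-8

0 * * * * /usr/local/bin/myscript.sh >> /var/log/myscript.log 2>&1
```

Alternatively, the variables can be exported inside the script itself, which keeps the behavior identical no matter who or what invokes it.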

ROBO vSAN cluster: one node is almost full and the other half full

Posted: 14 Jan 2022 04:23 AM PST

I have a 2-node vSAN cluster. One server raised an alarm, something like "vSAN disk errors". Everything still works, so I am moving VMs off of this storage.

But now I find that the "healthy" server is only half full, while the one with the error is 94% full, even after I removed a lot of data.

My understanding is that these 2 servers were supposed to be mirrored. How can I fix this? Although I pay for VMware support, due to hardware compatibility I could not upgrade past ESXi 6.0, and VMware won't offer support.

IPv6 connectivity suddenly lost, IPv6 neighbour router status becomes STALE at the same time. How can I avoid it?

Posted: 14 Jan 2022 04:59 AM PST

I have a VM on a host with bridged networking (hence, with its own MAC address). Both host and VM run CentOS. Their network is managed by simple /etc/sysconfig/network-scripts/ifcfg-enpXsY files which contain the static IP addresses. IPv4 works just fine.

I have assigned an IPv6 address to the VM (the host also has one) which is routed correctly in the data centre. Most connections use IPv4, however (no DNS AAAA entry for the machine yet, still testing IPv6).

When I boot up the VM it has full IPv6 connectivity. However, after a while IPv6 connectivity stops working (IPv6 magic?). I have narrowed the problem down to neighbour (ARP/NDISC) cache data:

When IPv6 is not working and I cannot ping or connect over IPv6 in or out, I see:

# ip -6 neighbour
fe80::1 dev enp1s2 lladdr 0c:86:72:2e:04:28 router STALE

Fix/workaround to refresh the cache:

# ip -6 neighbour flush dev enp1s2
# ip -6 neighbour
(empty, as expected)

Then ping6 the host from within the VM to fill the cache:

# ping6 2912:1375:23:9a6c::2
PING 2912:1375:23:9a6c::2(2912:1375:23:9a6c::2) 56 data bytes
64 bytes from 2912:1375:23:9a6c::2: icmp_seq=1 ttl=64 time=2.35 ms
64 bytes from 2912:1375:23:9a6c::2: icmp_seq=2 ttl=64 time=0.468 ms
^C
# ip -6 neighbour
fe80::1 dev enp1s2 lladdr 0c:86:72:2e:04:28 router REACHABLE
2912:1375:23:9a6c::2 dev enp1s2 lladdr 08:21:4b:b7:f8:31 DELAY

IPv6 neighbour/ARP table restored to validity and connectivity is working in and out!

So my questions are:

  1. Why does the cache become stale?
  2. What can I do to avoid it?
  3. Why/how does the command above fix it?

Of course I could run those commands in a cron job (how often?) but I suppose that cannot really be needed for IPv6 to work in general?
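As a less hacky interim workaround than a flush-and-ping cron job, the gateway's neighbour entry can be pinned so it never ages out (a sketch; the interface and MAC are taken from the output above, and a permanent entry has to be updated by hand if the router's MAC ever changes):

```
# Pin the gateway's neighbour entry so it can never go STALE/FAILED
ip -6 neighbour replace fe80::1 lladdr 0c:86:72:2e:04:28 router nud permanent dev enp1s2
```

That said, a STALE entry is normal and should be re-resolved automatically; if re-resolution fails, neighbour solicitation/advertisement packets (ICMPv6 types 135/136) are probably being lost, so filtering on the host bridge is the place to look.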

PS: I used a script for tests: The IPv6 stack breaks down about every 20 minutes. Can that be explained by RFCs?

PPS: Firewall config (shortened output, hopefully all relevant bits):

# ip6tables -nvL
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 9023  709K ACCEPT     icmpv6    !lo    *       ::/0                 ::/0

Chain OUTPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 9360  785K ACCEPT     icmpv6    *      !lo     ::/0                 ::/0

So, ICMPv6 accepted in/out on the VM. Do I need to check filtering on the host?

Red Hat - Berkeley DB library - Corrupted DB

Posted: 14 Jan 2022 04:02 AM PST

I am getting the errors below when executing the yum or rpm commands.

error: rpmdb: BDB0113 Thread/process 22448/139817567954752 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main: Error: rpmdb open failed

I believe this is happening because of a corrupted RPM DB. I have tried executing yum and rpm commands after rebuilding the RPM database, and at that point they work properly. But after some days, the same error occurs again.

Let me know how to fix this permanently.
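For reference, the usual rebuild sequence looks like this (a sketch of the standard RPM/Berkeley DB recovery steps; back up /var/lib/rpm first):

```
# Back up the current database, remove the stale BDB environment
# files, then rebuild the Packages index and clear yum's caches
cp -a /var/lib/rpm /var/lib/rpm.backup
rm -f /var/lib/rpm/__db.*
rpm --rebuilddb
yum clean all
```

Since the corruption keeps coming back, the recurring cause is worth hunting: interrupted yum/rpm transactions (Ctrl-C, OOM kills), monitoring or inventory agents querying rpm concurrently, and unclean shutdowns are common culprits.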

Thanks in Advance,

Security of a hardware token vs software token for two-factor authentication

Posted: 14 Jan 2022 03:43 AM PST

Surprisingly, I don't see this question on Server Fault already. I'm wondering about the pros and cons of hardware tokens vs. software tokens for two-factor authentication, only in the context of security, not convenience. I am referring only to time-based one-time password generators.

Is there a clear winner in terms of security? Does it vary according to the platform the software token (app) is installed on?

.net core 3 with Nginx reverse proxy redirect to port 5001, but no page is loaded

Posted: 14 Jan 2022 03:01 AM PST

I'm trying to set up an Ubuntu 18.04 droplet to run a .NET Core 3.1 web app. I'm following this tutorial.

So far I have nginx working (or at least I can see the nginx welcome page) when I enter the droplet IP in a browser. I have created the /var/www/html/example.com folder with my .NET Core application inside, and it's working. I have changed my DNS cache (on my local machine) to redirect example.com to my droplet IP.

But when I put example.com in the browser, I get redirected to example.com:5001 with ERR_CONNECTION_REFUSED.

The nginx access.log got this:

186.XXX.X.XX - - [03/May/2020:03:07:52 +0000] "GET / HTTP/1.1" 307 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"

This is my nginx configuration /etc/nginx/sites-available/example.com

server {
    listen 80;
    server_name example.com *.example.com;

    location / {
        proxy_pass http://localhost:5001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

I'm pretty new to cloud systems and server setup, so I'm kind of lost. I just want to get a server working so I can upload some code and practice .NET Core coding. Any help would be nice!

Thank you in advance!
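For what it's worth, the 307 with zero response bytes in the access log suggests the redirect comes from the app itself, not nginx; ASP.NET Core's HTTPS redirection middleware behaves exactly this way when it sits behind a plain-HTTP proxy. A sketch of the relevant part of Startup (method shown in isolation; the rest of the pipeline is omitted):

```csharp
// Startup.cs (sketch): trust the X-Forwarded-* headers that the
// nginx config above already sets, so the app knows the original
// scheme of each proxied request.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.HttpOverrides;

public void Configure(IApplicationBuilder app)
{
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor
                         | ForwardedHeaders.XForwardedProto
    });

    // While the site is plain HTTP end-to-end, this middleware is
    // what bounces requests to https://...:5001 - drop or guard it
    // until nginx terminates TLS:
    // app.UseHttpsRedirection();

    // ... rest of the pipeline
}
```

Once nginx serves HTTPS itself (listen 443 ssl) and forwards X-Forwarded-Proto, UseHttpsRedirection can safely be re-enabled.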

How to run openstack components' cli without SSL validation?

Posted: 14 Jan 2022 05:06 AM PST

(I use IPv6_Address in place of the real IP address)

.openrc setting:

export OS_CLOUD=mycloud
export OS_USERNAME=myusername
export OS_PASSWORD=mypassword
export OS_PROJECT_NAME=myproject
export OS_AUTH_URL=https://[IPv6_Address]:5000/v3

If I set this config in the clouds.yml file:

mycloud:
  identity_api_version: "3"
  region_name: RegionOne
  verify: False
  auth:
    auth_url: https://[IPv6_Address]:5000/v3
    user_domain_name: "Default"
    project_name: "myproject"
    project_domain_name: "default"

then running openstack server list works. But this time running nova list gives:

No handlers could be found for logger "keystoneauth.identity.generic.base"
ERROR (SSLError): SSL exception connecting to https://[IPv6_Address]:5000/v3/auth/tokens: HTTPSConnectionPool(host='IPv6_Address', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))

Trying nova list --insecure gives:

/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings    InsecureRequestWarning)  /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings    InsecureRequestWarning)  /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings    InsecureRequestWarning)  /usr/lib/python2.7/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings    InsecureRequestWarning)  usage: nova [--version] [--debug] [--os-cache] [--timings]              [--os-region-name <region-name>] [--service-type <service-type>]              [--service-name <service-name>]              [--os-endpoint-type <endpoint-type>]              [--os-compute-api-version <compute-api-ver>]              [--os-endpoint-override <bypass-url>] [--insecure]              [--os-cacert <ca-certificate>] [--os-cert <certificate>]              [--os-key <key>] [--timeout <seconds>] [--collect-timing]              [--os-auth-type <name>] [--os-auth-url OS_AUTH_URL]              [--os-system-scope OS_SYSTEM_SCOPE] [--os-domain-id OS_DOMAIN_ID]              [--os-domain-name OS_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID]              [--os-project-name OS_PROJECT_NAME]              [--os-project-domain-id OS_PROJECT_DOMAIN_ID]              [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]              [--os-trust-id OS_TRUST_ID]              [--os-default-domain-id OS_DEFAULT_DOMAIN_ID]              [--os-default-domain-name OS_DEFAULT_DOMAIN_NAME]              [--os-user-id OS_USER_ID] [--os-username OS_USERNAME]              [--os-user-domain-id OS_USER_DOMAIN_ID]              [--os-user-domain-name OS_USER_DOMAIN_NAME]              [--os-password OS_PASSWORD]              <subcommand> ...  error: unrecognized arguments: --insecure  Try 'nova help ' for more information.  

If I don't have a local SSL certificate file to connect to the OpenStack HTTPS API, how can I run the nova and glance commands? Is there a .novarc or .glancerc config file for them?

I also tried to create a nova.rc file with the same configuration as openrc and sourced it, but the result is the same.
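Note that in these legacy clients --insecure is a global option and must come before the subcommand; placed after list it is rejected as an unrecognized argument, which matches the error output above. A sketch:

```
# Global client options precede the subcommand
nova --insecure list
glance --insecure image-list
```

This still prints the urllib3 InsecureRequestWarning noise, but the request itself skips certificate verification, mirroring the verify: False setting that makes the openstack client work.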

docker-compose no such service: myapp

Posted: 14 Jan 2022 03:36 AM PST

I've run an image with docker-compose up.

With docker ps I get:

CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                    NAMES
55e1fd18acf1        simpleappnodedocker_web   "node app.js"            6 seconds ago       Up 6 seconds        0.0.0.0:9000->3000/tcp   myapp
9879ff20e241        postgres:9.6              "docker-entrypoint..."   36 hours ago        Up 36 hours         0.0.0.0:5432->5432/tcp   nd-db

I try to run bash to enter the shell, but I get an error. How can I solve this? I think I'm doing something wrong.

$ docker-compose run myapp /bin/bash
ERROR: No such service: myapp

docker-compose.yml:

version: '2'
services:
  web:
    container_name: myapp
    build: .
    command: node app.js
    ports:
      - "9000:3000"
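In Compose terms, the service in this file is named web; myapp is only the container_name. docker-compose subcommands address services by their service key, while plain docker commands use the container name:

```
# docker-compose uses the service name from the YAML ("web")...
docker-compose run web /bin/bash

# ...whereas docker addresses the running container by its name
docker exec -it myapp /bin/bash
```

docker exec attaches to the container already started by docker-compose up, which is usually what you want for poking around; docker-compose run spins up a fresh one-off container for the service.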

Disable global routing with OpenVPN

Posted: 14 Jan 2022 04:29 AM PST

I managed to install OpenVPN using the script [1] and am able to connect on macOS.

However, the default option is all traffic now route thru the VPN IP.

Is it possible to route traffic using this VPN only when the destination IP is X.X.Y.Z

For the reset of traffic, just use it without the VPN.

[1] https://github.com/Nyr/openvpn-install
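For reference, OpenVPN has client-side split-tunnel directives meant for exactly this; a sketch of additions to the client profile (X.X.Y.Z as in the question; verify the directives against your client version):

```
route-nopull                    # ignore routes pushed by the server, incl. redirect-gateway
route X.X.Y.Z 255.255.255.255   # send only this destination through the tunnel
```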

MySQL SSL: SSL_CTX_set_default_verify_paths failed

Posted: 14 Jan 2022 04:05 AM PST

I have been trying for a few days to get SSL working with MySQL.

This is the setup I currently have:

MySQL 5.7.17-0ubuntu0.16.04.1  

This is the error I receive when I start the MySQL server:

Failed to set up SSL because of the following SSL library error: SSL_CTX_set_default_verify_paths failed

Configuration File:

ssl-ca   = /etc/mysql-ssl/ca-cert.pem
ssl-cert = /etc/mysql-ssl/server-cert.pem
ssl-key  = /etc/mysql-ssl/server-key.pem
ssl

I read this post, chowned the files, and checked whether SELinux was enabled (it is not installed).

I have also run these commands and get the following responses:

sudo -u mysql cat /etc/mysql-ssl/ca-cert.pem
-----BEGIN CERTIFICATE-----

sudo -u mysql cat /etc/mysql-ssl/server-cert.pem
-----BEGIN CERTIFICATE-----

sudo -u mysql cat /etc/mysql-ssl/server-key.pem
-----BEGIN RSA PRIVATE KEY-----

At this point I am running out of ideas on where to turn next. Can anybody point me in the right direction?
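One thing worth ruling out is the certificate chain itself, since a CA file that is malformed or does not verify the server pair is a commonly reported cause of this error. The sketch below is self-contained (it builds a throwaway CA and server cert in a temp directory; all paths are hypothetical) and shows the two openssl checks you could then run against the real files in /etc/mysql-ssl:

```shell
set -e
d=$(mktemp -d)
# Throwaway test CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$d/ca-key.pem" -out "$d/ca-cert.pem" -subj "/CN=Test CA"
# Server key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout "$d/server-key.pem" -out "$d/server.csr" -subj "/CN=mysql-server"
openssl x509 -req -in "$d/server.csr" -CA "$d/ca-cert.pem" \
  -CAkey "$d/ca-key.pem" -CAcreateserial -days 1 -out "$d/server-cert.pem"
# Check 1: does the CA actually verify the server cert?
openssl verify -CAfile "$d/ca-cert.pem" "$d/server-cert.pem"
# Check 2: do the key and the cert share the same modulus?
openssl x509 -noout -modulus -in "$d/server-cert.pem" | openssl md5
openssl rsa  -noout -modulus -in "$d/server-key.pem"  | openssl md5
```

If either check fails against the real ca-cert.pem / server-cert.pem / server-key.pem, regenerating the certificates would be the next step.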

Getting Connection error while running AWS commands in powershell in windows 2012

Posted: 14 Jan 2022 02:00 AM PST

We have installed the AWS PowerShell tools, version given below. When we try to run AWS commands we get a connection error, as shown below:

---- AWS Powershell version ----

PS C:\Users> Get-AWSPowerShellVersion
AWS Tools for Windows PowerShell
Version 2.3.8.1
Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.

Amazon Web Services SDK for .NET
Version 2.3.8.1
Copyright 2009-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.

---- PSVersion 4.0 ----


--- Error ---

ConnectionError: ('Connection aborted.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond'))
2016-08-23 09:43:30,944 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255

('Connection aborted.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond'))

PS C:\Users> curl http://ec2.eu-central-1.amazonaws.com
curl : Unable to connect to the remote server
At line:1 char:1
+ curl http://ec2.eu-central-1.amazonaws.com
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

Ports 80 and 443 are open appropriately in the security groups. I have tried with and without exporting the proxy settings on the machine.

web server / iis won't install on server 2012

Posted: 14 Jan 2022 05:06 AM PST

I'm trying to install the Web Server role / IIS on my dynamic Windows 2012 cloud server but keep getting the following errors:

If the following is not installed and I try to install Web Server / IIS 8.0, it keeps telling me a restart is pending. Under Tools I can see IIS Manager, open it, and navigate to localhost; however, the role is not added to the server, and after restarting it all disappears.

  • "Windows Process Activation Service"

If I install the above and then try the installation, I get the following error code, about which Google has not revealed much:

  • 0x800f0922

I have so far tried the following:

  • deleting the main folder on drive C:
  • installing via PowerShell and the command prompt
  • removing IIS by running "start /w pkgmgr.exe /uu:IIS-WebServerRole;WAS-WindowsActivationService"
  • running: dism /online /cleanup-image /restorehealth
  • clearing/deleting the updates to see if something was corrupted
  • running the .NET repair application

Other roles still install on the system.

Has anyone got any ideas what else I can try before I have to wipe the system? That seems a bit like overkill and a lot of work, as this is a live production server.

Thanks in advance. J

Centos logrotate found error, skipping

Posted: 14 Jan 2022 02:00 AM PST

One of our servers has an access_log which is nearly 5 GB in size. There is currently no log rotation, so yesterday I enabled it for httpd.

The contents of /etc/logrotate.d/httpd is

/var/log/*.log {
    weekly
    missingok
    notifempty
    sharedscripts
    delaycompress
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}

When logrotate runs it generates an error:

Anacron job 'cron.daily' on
/etc/cron.daily/logrotate:

error: found error in /var/log/*.log , skipping

I cannot see what the error might be, as these all look like valid parameters. Any idea what the issue is?
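For comparison, the stock CentOS httpd stanza scopes the glob to Apache's own log directory instead of all of /var/log; a sketch, assuming the default log layout:

```
/var/log/httpd/*log {
    weekly
    missingok
    notifempty
    sharedscripts
    delaycompress
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
```

Running logrotate -d /etc/logrotate.d/httpd (debug mode, which makes no changes) usually prints exactly which line or glob it is objecting to.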

Ubuntu 14.04 Failing to join domain for Integration with Active Directory (winbind & samba)

Posted: 14 Jan 2022 03:01 AM PST

I've followed the tutorial at this link https://help.ubuntu.com/community/ActiveDirectoryWinbindHowto

Everything seems to be configured somewhat correctly: net rpc join worked, and the realm is listed when entering the command "realm list", but I am still getting an error when trying net ads join.

kinit works and gives me a ticket shown in klist. wbinfo -g gives no output. wbinfo -a user%pass gives:

plaintext password authentication succeeded
challenge/response password authentication failed
Could not authenticate user jball with challenge response

sudo net ads testjoin -S domain.dc.com -U username -d 3 returns a bunch of errors, such as failed to resolve _ldap._tcp..... (Success) and Failed to send DNS query (NT_STATUS_UNSUCCESSFUL). It successfully contacts the LDAP server, but ends with an error message saying:

kinit succeeded but ads_sasl_spnego_krb5_bind failed: Invalid credentials
Join to domain is not valid: LDAP_INVALID_CREDENTIALS

If any more information is needed or if you would like me to post any config files please let me know, I will respond asap. Any help would be greatly appreciated, thanks.

PHP Session Storage in Fault Tolerant Memcached Pool

Posted: 14 Jan 2022 04:02 AM PST

I recently had the opportunity to move a web application from using a Nginx proxy "loadbalancer" to an F5 loadbalancer. Unfortunately during that migration it became clear that the memcached session storage needed to move from the Nginx proxy server to "somewhere". My thinking is that I should put memcached on all 3 of the web servers (the servers that sit behind the F5 in a pool) and use php-memcache or php-memcached to save sessions. Here's the trouble:

I've tried both php-memcache and php-memcached and cannot get either one to behave properly if one of the servers goes down. My latest attempt was with this configuration:

memcached version 2.2.0 with the configuration settings:

session.save_handler = memcached
session.save_path    = "172.29.104.13:11211,172.29.104.14:11211"

I have nothing special in memcached.ini other than extension=memcached.so.

With this configuration on both server 1 and 2 (I removed 3 temporarily to test), I point JMeter at the F5 VIP and start traffic. I can see memcached.log (the daemon's log) on both systems start filling up, though I haven't spent time deciphering it.

Then, if I stop one of the memcached daemons, traffic begins failing and the response I get is

session_start(): Write of lock failed

from the memcached server that remains.

At the end of the day my goal is simple: (a) memcached must not run on a single server (a single point of failure), and (b) the cluster needs to be resilient to the failure of a pool member.

I've also tried php-memcache but it too fails. For php-memcache the configuration looks like this:

memcache version 3.0.8 (beta) with the configuration settings:

session.save_handler = memcache
session.save_path    = "tcp://172.29.104.13:11211, tcp://172.29.104.14:11211"

and in memcache.ini:

extension=memcache.so

[memcache]
memcache.dbpath="/var/lib/memcache"
memcache.maxreclevel=0
memcache.maxfiles=0
memcache.archivememlim=0
memcache.maxfilesize=0
memcache.maxratio=0
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=2

The error here is simply an invalid session token, implying to me that the remaining server never actually had the session token stored; in other words, session replication wasn't active.

I have not looked at putting session persistence back on the F5, though as a last resort I could do so, and clients trying to connect to the lost member would have to reauthenticate.
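For what it's worth, the php-memcached extension exposes session-replication settings aimed at exactly this failover case; a sketch for memcached.ini (option names as of php-memcached 2.2; verify against your build, as some have been renamed across versions):

```
memcached.sess_locking = On
memcached.sess_consistent_hash = On
memcached.sess_binary = On
memcached.sess_number_of_replicas = 1
memcached.sess_randomize_replica_read = On
memcached.sess_remove_failed = 1
```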

rsync of >4GB files

Posted: 14 Jan 2022 03:57 AM PST

A silly one:

Do you have any problems rsync'ing large (>4 GB) files under modern Linux (32-bit, 64-bit, large-file support turned on)? I've done some tests of my own between two 64-bit boxes and didn't have any problems transferring 6-10 GB files. To make the test thorough I altered the files, ran rsync again, and checked the md5 sums; all seems OK.

But after I saw this bug report I got a bit worried. I did some searching but have not found any confirmation of the problem.

Thanks for your thoughts!

Edit: file systems: ext3, reiserfs
