Saturday, December 4, 2021

Recent Questions - Server Fault


In DRBD 9, can't parse node-id or connection

Posted: 04 Dec 2021 10:31 PM PST

Following https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/ and http://manpages.ubuntu.com/manpages/bionic/man5/drbd.conf-9.0.5.html, I configured my DRBD resource. My config:

resource c_ssd1_drbd1 {
    device    /dev/drbd1;
    disk      /dev/pool_ssd_1/bd1;
    meta-disk internal;

    on NODE-1 {
        address 172.*.*.120:7701;
        node-id 0;
    }
    on NODE-2 {
        address 172.*.*.121:7702;
        node-id 1;
    }
    on NODE-3 {
        address 172.*.*.122:7703;
        node-id 2;
    }

    connection {
        host NODE-1 port 7701;
        host NODE-2 port 7702;
        net {
            protocol C;
        }
    }
    connection {
        host NODE-1 port 7701;
        host NODE-3 port 7703;
        net {
            protocol A;
        }
    }
    connection {
        host NODE-2 port 7702;
        host NODE-3 port 7703;
        net {
            protocol A;
        }
    }
}

When I try to bring up the resource c_ssd1_drbd1 with the command

 drbdadm up c_ssd1_drbd1  

it shows me this error:

drbdadm up drbd.d/c_ssd1_drbd1.res:10: Parse error: 'disk | device | address | meta-disk | flexible-meta-disk' expected, but got 'node-id'

If I comment out the node-id lines, it then fails to parse the connection sections. Why?

Thanks for any help.
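A first check worth doing, as a hedged diagnostic rather than a confirmed cause: node-id and per-resource connection sections are DRBD 9 syntax, and a parse error like this is typical of a drbd-utils build that still speaks the DRBD 8.4 configuration format. Verifying the userland version and dry-parsing the file costs nothing:

    # which config parser does the installed userland use?
    drbdadm --version
    # parse and dump the resource without touching it
    drbdadm dump c_ssd1_drbd1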

No Bonjour even on successful OpenVPN TAP connection

Posted: 04 Dec 2021 10:00 PM PST

I have a wireless printer/scanner on a remote network that is accessible via an OpenVPN server in eth-bridge mode running on an Ubuntu 20.04 host (on the same remote network as the scanner, of course). IP printing works fine; it's the scanner service I need from tools on the client machine(s), in this case OSX 10.15.7 via Tunnelblick, which is why I set up an Ethernet bridge on the server.

Despite following the OpenVPN documentation and other helpful guides, and despite what appears to be a working OpenVPN layer 2 server that is accepting clients, I am still not seeing the expected mDNS broadcast from any server-side devices. As I understand it, this is what most imaging software (Image Capture and VueScan for my use case) needs in order to use the scanner.

A few points in my troubleshooting process:

• Client connects, TAP sets up and is assigned an IP according to the server-bridge directive, placing the client in the server-side LAN
• Remote router visibly registers my client as a 'connected device'
• While connected, all remote hosts (including the scanner) are ping-able from the client
• While connected, dns-sd -Z on OSX confirms I'm not seeing any new services; I do continue to see local services, however
• Client-side firewall is off
• Remote scanner service is broadcasting as expected, confirmed by running avahi-browse on the remote server

Maybe this specific traffic is being blocked from the TAP interface (client or server) in some other way? I have found only a handful of references to partially similar issues, none of which seems to provide a resolution. I am hoping for some guidance on further troubleshooting.

The following workarounds are not preferred and have been inadequate:
• Accessing the scanner's web server
• VNC to remote host to do image capturing locally relative to scanner

I am of course open to alternative methods of accomplishing the intended purpose, though OpenVPN TAP seems to be the most common solution for this kind of thing, so what am I missing here?

Server side config, iptables and interface details below for reference:

Server Config

local 192.168.1.113
port ****
proto udp
dev tap0
ca server/ca.crt
cert server/server.crt
key server/server.key
dh server/dh.pem
auth SHA512
tls-crypt server/tc.key
server-bridge 192.168.1.1 255.255.255.0 192.168.1.201 192.168.1.240
push "redirect-gateway def1"
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 192.168.1.1"
push "route 192.168.1.0 255.255.255.0"
push "route-delay 10"
keepalive 10 120
cipher AES-256-CBC
user nobody
group nogroup
persist-key
persist-tun
verb 3
crl-verify server/crl.pem
explicit-exit-notify

Iptables

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -p udp -m udp --dport 1194 -j ACCEPT
-A INPUT -i tap0 -j ACCEPT
-A INPUT -i br0 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.8.0.0/24 -j ACCEPT  # former TUN config
-A FORWARD -i br0 -j ACCEPT

Netplan config

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [enp2s0]
      addresses: [192.168.1.113/24]
      gateway4: 192.168.1.1
      mtu: 1500
      nameservers:
        addresses: [8.8.8.8]
      parameters:
        stp: true
        forward-delay: 0
      dhcp4: no
      dhcp6: no
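One thing worth double-checking, hedged since only fragments of the setup are shown: the netplan above only enslaves enp2s0 to br0, so unless an OpenVPN up-script adds tap0 to the bridge, multicast may not cross even where routed unicast appears to. Beyond that, mDNS (UDP 5353 to 224.0.0.251) is easy to lose on a TAP link; a workaround sketch, assuming avahi-daemon runs on the Ubuntu server, is to let Avahi reflect services between segments instead of relying on raw multicast:

    # /etc/avahi/avahi-daemon.conf (restart avahi-daemon afterwards)
    [reflector]
    enable-reflector=yes

A capture on the server while a client is connected (tcpdump -ni tap0 udp port 5353) would also show whether the multicast frames ever reach the tunnel.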

Running Vagrant VM on Ubuntu 20.04 VM on VirtualBox on Windows 11 Host Machine, Need SSH access to vagrant from windows 11

Posted: 04 Dec 2021 10:37 PM PST

I am doing a computer vision project, and I have a Vagrant VM on an Ubuntu 20.04 VM on VirtualBox on a Windows 11 host machine. I'd like to use the Windows 11 host for the CV work, since it needs a lot of processing power, and communicate over SSH with a Python app currently running in Vagrant, since that app is not very resource intensive.

I have a port forwarded (8000 -> 8000) in virtual box to access the web interface of the app I need, and I can access it from my browser on Firefox in Windows 11.

The problem arises when I attempt to SSH into Vagrant (2222 -> 2222): PuTTY gives a connection refused error, and Windows cmd gives "kex_exchange_identification: read: Connection aborted".

I've tried getting the private key, and I've tried all variants of ssh commands that I could find, but it's simply not working. Inside Ubuntu I can simply type "vagrant ssh" and it connects without any delay.

Am I missing something?

I usually use WSL2 for programming in linux, but my particular application requires that it runs in vagrant in ubuntu.

port forwarding

vagrant ssh-config

config.vm.box = "generic/ubuntu2010"

# Publicly forwarded ports.
# The below ports are accessible to all machines on the same network.
# To limit access to the local network, add "host_ip".
config.vm.network "private_network", type: "dhcp"
# Eg: config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
config.vm.network "forwarded_port", guest: 8000, host: 8000 # Web App
config.vm.network "forwarded_port", guest: 9000, host: 9000 # Remote server
#config.vm.network "forwarded_port", guest: 2222, host: 2222 # Remote server # <-- commented out because it didn't work, and Vagrant's startup already forwards port 22 to port 2222
#config.vm.network "private_network", type: "dhcp" # <-- so is this
config.ssh.username = 'vagrant'
config.ssh.password = 'test'
config.ssh.insert_key = 'false'

Vagrant's Startup

Never mind, I'm an idiot. I installed OpenSSH in Ubuntu and was able to access Vagrant.
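For anyone landing here, a sketch of the resulting connection path, with port numbers assumed from the question (Vagrant's default NAT forward is guest 22 -> host 2222, and VirtualBox must forward that port again from the Ubuntu VM to Windows):

    ssh -p 2222 vagrant@127.0.0.1

Which host needed openssh-server depends on where the hop terminates; in this case installing it inside Ubuntu was the missing piece.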

ERROR: NotSupportedError - The EB CLI cannot find your SSH key file for keyname

Posted: 04 Dec 2021 09:04 PM PST

Trying to deploy an app on AWS and this is only one of the hurdles I've had to deal with. I am trying to connect to an Elastic Beanstalk instance and when I attempt to connect with the awsebcli tool I get this error:

ERROR: NotSupportedError - The EB CLI cannot find your SSH key file for keyname "HFA". Your SSH key file must be located in the .ssh folder in your home directory.

I do not have this keypair. I cannot get this keypair. I do not want to use this keypair. Nothing I do in the AWS Console (including nuking the instance) will convince AWS of this. There is nothing I need in the current account (this is a free tier account for a school project and at this point I would get another one but it's the principle of the thing).

In short, is there any way I can generate another set of SSH credentials? The EB CLI will happily ask me if I want a new set but then asks for the old one when I try to connect and it's driving me nuts.
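The EB CLI can mint and register a fresh key pair itself; a hedged sketch (the prompts vary by CLI version, and the environment's instances need to pick up the new key):

    # interactively choose or create a new key pair for the environment
    eb ssh --setup
    # then connect as usual
    eb ssh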

How to install memcache-top

Posted: 04 Dec 2021 07:20 PM PST

How to install memcache-top via SSH please?

It is hosted on Google Code: https://code.google.com/archive/p/memcache-top/downloads

Thank you.
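memcache-top is a single Perl script, so installation over SSH is just a download plus an executable bit. A sketch; the URL follows the Google Code archive's download pattern and the file name comes from that downloads page, both unverified:

    wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/memcache-top/memcache-top-v0.6
    chmod +x memcache-top-v0.6
    ./memcache-top-v0.6 --instances 127.0.0.1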

HINT on 127: "Command not found" fail2ban

Posted: 04 Dec 2021 03:56 PM PST

I have a problem with Fail2ban. In the log I have this:

2021-12-05 00:49:23,968 fail2ban.utils [979765]: ERROR 7f9a6df8cdf0 -- stderr: '/bin/sh: 1: iptables: not found'
2021-12-05 00:49:23,968 fail2ban.utils [979765]: ERROR 7f9a6df8cdf0 -- stderr: '/bin/sh: 2: iptables: not found'
2021-12-05 00:49:23,968 fail2ban.utils [979765]: ERROR 7f9a6df8cdf0 -- stderr: '/bin/sh: 3: iptables: not found'
2021-12-05 00:49:23,968 fail2ban.utils [979765]: ERROR 7f9a6df8cdf0 -- returned 127
2021-12-05 00:49:23,969 fail2ban.utils [979765]: INFO HINT on 127: "Command not found". Make sure that all commands in 'iptables -w -N f2b-nginx-badbots\niptables -w -A f2b-nginx-badbots -j RETURN\niptables -w -I INPUT -p tcp -j f2b-nginx-badbots' are in the PATH of fail2ban-server process (grep -a PATH= /proc/`pidof -x fail2ban-server`/environ). You may want to start "fail2ban-server -f" separately, initiate it with "fail2ban-client reload" in another shell session and observe if additional informative error messages appear in the terminals.
2021-12-05 00:49:23,969 fail2ban.actions [979765]: ERROR Failed to execute ban jail 'nginx-badbots' action 'iptables-allports' info 'ActionInfo({'ip': '81.213.141.194', 'family': 'inet4', 'fid': <function Actions.ActionInfo. at 0x7f9a6f56eca0>, 'raw-ticket': <function Actions.ActionInfo. at 0x7f9a6f56f3a0>})': Error starting action Jail('nginx-badbots')/iptables-allports: 'Script error'
2021-12-05 00:49:23,969 fail2ban.actions [979765]: NOTICE [nginx-badbots] Restore Ban 82.66.13.48
2021-12-05 00:49:23,976 fail2ban.utils [979765]: ERROR 7f9a6df8cdf0 -- exec: iptables -w -N f2b-nginx-badbots

Can someone enlighten me?

Thanks in advance.
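The log already names the problem: the shell that fail2ban spawns cannot find an iptables binary. A hedged first check (the package name assumes Debian/Ubuntu):

    # is iptables present at all, and where?
    command -v iptables || ls -l /sbin/iptables /usr/sbin/iptables
    # if genuinely absent:
    apt install iptables
    # if present but outside the PATH fail2ban runs with, compare:
    grep -az PATH= /proc/$(pidof -x fail2ban-server)/environ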

How to redirect traffic from squid to vpn?

Posted: 04 Dec 2021 02:46 PM PST

I have a Windows machine with a Squid server and a VPN client connection (which is not the default gateway).

What I want is to redirect some traffic from squid to my default ethernet connection and some to VPN.

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 192.168.100.11
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.100.1

PPP adapter vpn_conn:

   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 172.16.3.33
   Subnet Mask . . . . . . . . . . . : 255.255.255.255
   Default Gateway . . . . . . . . . :

squid conf

http_port 2003
acl user3_acl myport 2003
tcp_outgoing_address 172.16.3.33 user3_acl

http_port 2004
acl user4_acl myport 2004

Port 2004 works as expected through my Ethernet adapter, but the redirect to the VPN doesn't work.

the log contains

1638648992.630     75 33.33.333.333 NONE/503 0 CONNECT docs.microsoft.com:443 - HIER_NONE/- -  
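tcp_outgoing_address only picks the source IP; Windows still selects the outgoing interface from its routing table, and with no route pointing at the PPP adapter the connection fails exactly as the NONE/503 HIER_NONE entry above shows. A hedged sketch (the destination network is hypothetical; on a point-to-point adapter the gateway is the adapter's own address):

    route add 40.113.0.0 mask 255.255.0.0 172.16.3.33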

Can't SSH into Raspberry Pi Ubuntu 20.04 [closed]

Posted: 04 Dec 2021 01:29 PM PST

I'm trying to follow this tutorial on setting up a headless Raspberry Pi 3B v1.2 With Ubuntu 20.04: https://roboticsbackend.com/install-ubuntu-on-raspberry-pi-without-monitor/#ssh_setup

The Raspberry Pi connects as expected to my WiFi hotspot from my Android smartphone, but when I try to SSH into the Raspberry Pi, it refuses the connection.

The tutorial says SSH is enabled by default because "ssh_pwauth: true" is set in the "user-data" file. So why can I still not connect?
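A hedged diagnostic sketch: "connection refused" means nothing is listening on port 22 yet, not a rejected password, and on the preinstalled Ubuntu images cloud-init can take a while (or fail) before sshd starts:

    # from the client: is port 22 closed, filtered, or open?
    nmap -p 22 <pi-address>
    # on the Pi itself, with a monitor and keyboard if possible:
    systemctl status ssh
    cloud-init status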

Intel S2600JF server: Processor PCIe Link Speed menu in the BIOS is missing

Posted: 04 Dec 2021 01:24 PM PST

I have an Intel four-node server with an S2600JF motherboard per node (older generation: DDR3 RAM, E5-2600 v1 and v2). I want to insert an NVMe PCIe card (with two drives) that requires bifurcation.

Based on the description of the motherboard, there should be a menu item in the BIOS for this (Advanced - PCI Configuration - Processor PCIe Link Speed), but I cannot find it. I updated the BIOS, but the menu item is still missing.

BIOS pdf

See page 91 of the PDF. Can the problem be solved? Maybe it depends on the motherboard revision? Or on the CPU version? (Currently two E5-2620 v1 CPUs are installed.)

Thank you in advance for your help, Laszlo

How can one recover/write a label (clone existing one)?

Posted: 04 Dec 2021 01:47 PM PST

ZFS stores 4 labels, 2 at the beginning of a device, 2 at the end. When they are corrupted a pool cannot be mounted.

I had a case of 3 broken labels (failed to unpack), but 1 was still intact. I could list it with zdb -lu just fine.

zpool import -d /dev/sda failed. Using -f, and/or -F, and/or -D failed.

cannot import '/dev/sda': no such pool available  

Is there any way I could copy the label #2 to the labels #0, #1, #3?

I am assuming they are redundant copies, existing to boost reliability. However, if that were true, I fail to understand why zfs wouldn't import a pool if there's at least one label left intact, and then simply restore the other three.

Background on how it came to this issue:

  1. I did the stupid thing and created two of my pools with device names such as /dev/sda instead of /dev/disk/by-uuid/1234. Honestly, I don't know what I was thinking, because I've been there before.
  2. Today I plugged in a new drive, wanting to create a new, bigger pool.
  3. Of course, the two pools that failed were those whose "sda" names shifted by one letter.
  4. Once I realized this, I rebooted without the new drive, and the pools imported just fine with the correct device names used inside the label.

Why was this reported as a label issue? The labels are still broken, even after the import, with only label 2 intact. How can I fix them?

Add-on question: Is there a tool such as zpool note-my-device-has-a-new-name /dev/sda /dev/disk/by-uuid/1234? Considering the amount of people having the issue, this seems to be helpful. Once I've got my backup of those pools updated, I'll try again.
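For the add-on question: there is no dedicated rename tool, but re-importing a pool while scanning the by-id directory rewrites the device paths stored in its labels, which is the usual way out of the shifting-/dev/sdX trap. A hedged sketch, assuming a pool named tank:

    zpool export tank
    zpool import -d /dev/disk/by-id tank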

What Windows drivers survive a wipe / reset?

Posted: 04 Dec 2021 08:40 PM PST

We are fully on board with the modern mobile device management dream: managing PCs with Intune and onboarding them using Autopilot. These PCs are purchased with a clean install of Windows; for those we are migrating, we install a clean copy. When a PC moves between users or roles, we Wipe / reset it. We also rely on Windows Update to maintain its drivers (see Drivers 101). Normally, this works great. However, we have recently found a couple of new models whose NIC and hard drive drivers are missing after a Windows reset. I am assuming the difference must be one of the following:

  1. Those that fail had a class of driver pre-installed that does not survive the wipe.
  2. Or, for those that do work, Windows must have a default driver.

Does anyone know what the difference is or how to determine this? Bonus points for documentation I can share with our hardware vendor.
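As a stopgap while the vendor question is open, the third-party driver store can be exported before a reset and re-injected afterwards; a hedged sketch (the destination path is arbitrary; run from an elevated prompt):

    dism /online /export-driver /destination:D:\DriverBackup
    :: after the reset, re-inject:
    pnputil /add-driver D:\DriverBackup\*.inf /subdirs /install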

ZFS performance: Extreme low write speed

Posted: 04 Dec 2021 12:48 PM PST

I am running a small home server. The specs are:

  • CPU: AMD Ryzen 5 2600
  • RAM: 32 GB ECC
  • System drive: 128GB NVMe SSD
  • Data drives: 3x 4 TB Seagate Barracuda HDD

The server runs some applications like Nextcloud or Gitea and I want to run 1-2 VMs on it. So there are some web applications, databases and VMs.

The applications and qcow2 images are stored on a raidz1 pool:

$ sudo zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

When I used the applications in the first weeks, I experienced no problems. But for a few weeks now I have been seeing extremely low write speeds. The Nextcloud instance is not very fast, and when I try to start a fresh VM with Windows 10, it needs about five minutes to get to the login screen.

I did some performance testing using fio and got following results:

Test              IOPS    Bandwidth (KiB/s)
random read       37,800  148,000
random write      31      127
sequential read   72,100  282,000
sequential write  33      134

I did some research before posting here and read that I should add a SLOG to the zfs pool for better performance with databases and VMs. But that's no option at the moment. I need to get christmas gifts first :D

But even without a SLOG I don't think these figures are correct :(

Does anyone have an idea? :)
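Those write numbers look like every write is being treated as synchronous. A hedged way to test that theory with fio; if the async run is orders of magnitude faster, the pain is sync-write latency, which is what a SLOG would absorb (it would also be worth checking whether these Barracudas are SMR models, a known cause of collapsing sustained writes):

    # sync writes, the pattern databases and qcow2 guests often produce
    fio --name=syncwrite --rw=randwrite --bs=4k --size=1G --sync=1
    # same pattern without O_SYNC, for comparison
    fio --name=asyncwrite --rw=randwrite --bs=4k --size=1G --sync=0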

Using Gitlab docker behind nginx proxy manager docker

Posted: 04 Dec 2021 10:04 PM PST

I am trying to set up Nginx Proxy Manager (NPM) as a reverse proxy for GitLab and other websites/services. GitLab itself runs inside a Docker container that has its own IP address. NPM is also inside a Docker container. Both containers run on an Unraid server (and were installed from the "Apps", which in this case are prefilled Docker templates).

I've tried this:
https://www.itsfullofstars.de/2019/06/gitlab-behind-a-reverse-proxy/

But this just lead to a 502 Bad Gateway error from Nginx.

Also tried some other things, but most links I find talk about decoupling the nginx from gitlab with an nginx on the same machine pointing to some gitlab stuff.

At this point I am lost as to why nothing works, and am just poking around in config files without really knowing what I am doing. I don't even know what to provide you with; if you need something to diagnose my problem, I'll gladly attach it.

Edit Logs:
Error log looks like this:

2020/06/24 11:55:54 [error] 2834#2834: *1966 connect() failed (113: Host is unreachable) while connecting to upstream, client: 0.0.0.0, server: develop.company.com, request: "GET / HTTP/2.0", upstream: "http://192.168.10.170:80/", host: "develop.company.com", referrer: "http://192.168.10.135:7818/nginx/proxy"   

Access Log like this:

[24/Jun/2020:11:49:56 +0200] - 502 502 - GET https develop.company.com "/" [Client 0.0.0.0] [Length 166] [Gzip -] [Sent-to 192.168.10.170] "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" "http://192.168.10.135:7818/nginx/proxy"
[24/Jun/2020:11:49:56 +0200] - - 499 - GET https develop.company.com "/favicon.ico" [Client 0.0.0.0] [Length 0] [Gzip -] [Sent-to 192.168.10.170] "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0" "-"

Note that I changed the client IP for the purpose of uploading here. Since I'm testing it from within the network where proxy and gitlab are located, this is our external IP.

Edit Config:
Gitlab:
I tried only with this:
external_url="https://develop.company.com
But also this:
nginx['listen_port'] = 80
nginx['listen_https'] = false

I also tried the http variant for external_url.

NPM:
(Screenshots of the NPM proxy host configuration omitted.)
I also tried http with 443, https with 80, but it didn't matter (and also wouldn't have made much sense).
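The error log's connect() failed (113: Host is unreachable) is a network-layer failure between the proxy and GitLab, not a GitLab setting, so the configs above may be fine. A hedged first check from inside the proxy container (the container name and available tooling are assumptions; Unraid template names differ):

    docker exec -it NginxProxyManager ping -c 3 192.168.10.170

If that fails, the two containers sit on Docker networks that cannot reach each other, and that is the thing to fix before touching gitlab.rb.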

Apache web server won't display a webpage

Posted: 04 Dec 2021 05:01 PM PST

I've literally tried everything to get my web server visible to the public, but it's just not working. Every time I type my server's IP into a browser like Firefox, it gets stuck on "Waiting for ipaddress..." forever.

So here's what I did:

  • I completely reinstalled the operating system. I am using Centos 7 & Google Cloud

  • I installed httpd using the simple step by step guide as seen here: https://phoenixnap.com/kb/install-apache-on-centos-7

  • I verified that the server is listening on port 80 with the command:

netstat -anp | grep httpd.

sh-4.2# netstat -anp | grep httpd
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      18025/httpd
unix  3      [ ]         STREAM     CONNECTED     23993    18025/httpd

At first it was listening on tcp6, so I ran nano /etc/httpd/conf/httpd.conf and changed "Listen 80" to "Listen 0.0.0.0:80". I still can't connect to my web server after restarting it.

I tried configuring virtual hosts by following the guide here: https://support.rackspace.com/how-to/set-up-virtual-hosts-on-centos/

I am not running iptables. I had no problems opening up port 80 in firewalld with the commands:

sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --reload

I also tried

sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload

I then typed sudo firewall-cmd --list-all and saw that services http and https were listed

I still couldn't get a web page when I type the server ip in my browser so I completely disabled selinux and firewalld. It's still not working.

I installed IP tables and opened the necessary ports:

-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

In the VPC Network -> Firewall Rules tab in Google Cloud you can clearly see that http is open for all ip ranges

default-allow-http  Ingress  http-server  IP ranges: 0.0.0.0/0  tcp:80  Allow  1000  default  

I used a curl command in the SSH console to test the website curl -I http://localhost

HTTP/1.1 200 OK
Date: Sun, 22 Dec 2019 15:19:18 GMT
Server: Apache/2.4.6 (CentOS)
Last-Modified: Sun, 22 Dec 2019 14:54:44 GMT
ETag: "5-59a4c176b0e2b"
Accept-Ranges: bytes
Content-Length: 5
Content-Type: text/html; charset=UTF-8

And it responded with 200 OK, meaning the web server itself is configured correctly. I opened all the ports and still cannot get a page to display when I type the server IP into my Firefox browser. I don't understand... What am I doing wrong?

EDIT: Here is the error log

[Sun Dec 22 16:11:14.246082 2019] [suexec:notice] [pid 1244] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sun Dec 22 16:11:14.262897 2019] [lbmethod_heartbeat:notice] [pid 1244] AH02282: No slotmem from mod_heartmonitor
[Sun Dec 22 16:11:14.262979 2019] [ssl:warn] [pid 1244] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]
[Sun Dec 22 16:11:14.270177 2019] [mpm_prefork:notice] [pid 1244] AH00163: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips configured -- resuming normal operations
[Sun Dec 22 16:11:14.270214 2019] [core:notice] [pid 1244] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
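Since curl against localhost returns 200, the next split is whether outside packets reach the VM at all. A hedged sketch: watch port 80 while requesting the page from an external machine; if no SYN arrives, the blocker is on the Google Cloud side, for instance the default-allow-http rule only applying to instances that actually carry the http-server network tag:

    sudo tcpdump -ni eth0 tcp port 80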

Windows server 2016 Failover Cluster does not complete Forming the cluster

Posted: 04 Dec 2021 04:03 PM PST

I am trying to set up a 2 Node failover cluster using Windows server 2016. I deployed the servers in AWS. Here are the details.

I used:

  • 1 server acting as the domain controller (10.30.10.101), hosting the domain globex.local

  • 2 servers acting as nodes, joined to the same domain: NODE01 (10.30.10.102) and NODE02 (10.30.10.103)

  • 1 server acting as the iSCSI file server, also joined to the same domain

  • One AWS region and one subnet (10.30.10.0/24) for all servers

Attaching the iSCSI disks is successful. Cluster validation is also successful. But when I go to create the cluster, it gets stuck at the FORMING CLUSTER stage for a long time and gives me the following errors. I did a lot of research and granted the domain administrators the permissions necessary to create cluster resource objects (computers). All the servers are in the same folder. While creating the cluster, I can see it creates a computer object with the cluster name I gave, but it does not finish creating the cluster.


In particular, I remember that when adding the nodes, I could only add the local node by its NetBIOS name. When I used the NetBIOS name for the remote node, it gave an error; I used IP addresses and then it worked. But in tutorial videos I can see both nodes being added by their short NetBIOS names. I wonder if that is the problem.

I have struggled hard to solve this but still have no luck. Seeking a solution.

Beginning to configure the cluster Cluster.
Initializing Cluster Cluster.
Validating cluster state on node NODE02.globex.local.
Searching the domain for computer object 'Cluster'.
Find a suitable domain controller for node NODE02.globex.local.
Check whether the computer object Cluster for node NODE02.globex.local exists in the domain. Domain controller \\GRI-DC.globex.local.
Bind to domain controller \\GRI-DC.globex.local.
Check whether the computer object NODE02.globex.local for node NODE02.globex.local exists in the domain. Domain controller \\GRI-DC.globex.local.
Verifying computer object 'Cluster' in the domain.
Checking for account information for the computer object in the 'UserAccountControl' flag for Cluster.
Validating installation of the Network FT Driver on node NODE02.globex.local.
Validating installation of the Cluster Disk Driver on node NODE02.globex.local.
Configuring Cluster Service on node NODE02.globex.local.
Validating installation of the Network FT Driver on node NODE01.globex.local.
Validating installation of the Cluster Disk Driver on node NODE01.globex.local.
Configuring Cluster Service on node NODE01.globex.local.
Waiting for notification that Cluster service on node NODE02.globex.local has started.
Forming cluster 'Cluster'.
Operation failed, attempting cleanup.
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
An error occurred while creating the cluster and the nodes will be cleaned up. Please wait...
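A hedged way to get a more specific error than the wizard's generic cleanup message: create the cluster from PowerShell with a fixed address and no storage (node names are from the question; the spare IP is a placeholder). Note that AWS does not honour gratuitous ARP, so the cluster IP must also exist as a free secondary private IP on the nodes' ENIs:

    New-Cluster -Name Cluster -Node NODE01,NODE02 -StaticAddress 10.30.10.150 -NoStorage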

Slow Mailbox Migration "within" Exchange 2016 databases on same server

Posted: 04 Dec 2021 07:05 PM PST

We recently migrated from Exchange 2010 to 2016. Everything went smoothly until one day we had to fail Veeam replication over to the DR site. One of the databases crashed and had to be restored from backup. Since then, this database frequently ends up in a dirty shutdown and has to be mounted with the -AcceptDataloss switch. In short, we decided to move the mailboxes from this database to a new one. There are around 175 mailboxes with 350 GB of data. We are trying to migrate in batches of 10-15, but it is very slow: it takes days to migrate 4-5 users. I adjusted Exchange throttling for this activity as recommended at https://justaucguy.wordpress.com/2018/08/24/slow-mailbox-moves-in-exchange-2016/ but still get nowhere near full speed.

Anyone can give any idea or suggestion?
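The Mailbox Replication Service records why each move crawls; a hedged first step is to pull the per-move statistics and look at StatusDetail (values like StalledDueToTarget_DiskLatency name the actual bottleneck, which matters here given the source database's dirty-shutdown history):

    Get-MoveRequest | Get-MoveRequestStatistics | Select-Object DisplayName, StatusDetail, PercentComplete, BytesTransferred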

TONS of 4625 events. Failed login attempts. No IP, no username

Posted: 04 Dec 2021 03:04 PM PST

I have a server that keeps getting failed login events (4625). They occur roughly every 20-30 minutes daily, and appear to be on a schedule.

I've tried deleting stored credentials. Disabling RDS. I've tried locating a pattern with Procmon and Wireshark, and at one point thought it might be the services for Labtech (ConnectWise Automate) but disabling this temporarily didn't make a difference.

An account failed to log on.

Subject:

Security ID:    SYSTEM
Account Name:   SERVER$
Account Domain: DOMAIN
Logon ID:       0x3E7

Logon Type: 3

Account For Which Logon Failed:

Security ID:    NULL SID
Account Name:
Account Domain:

Failure Information:

Failure Reason: Unknown user name or bad password.
Status:         0xC000006D
Sub Status:     0xC0000064

Process Information:

Caller Process ID:   0x2f4
Caller Process Name: C:\Windows\System32\lsass.exe

Network Information:

Workstation Name:       SERVER
Source Network Address: -
Source Port:            -

Detailed Authentication Information:

Logon Process:            Schannel
Authentication Package:   Kerberos
Transited Services:       -
Package Name (NTLM only): -
Key Length:               0
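Logon Process Schannel with an empty account name usually means something is opening a TLS connection (LDAPS, for example) and attempting certificate-mapped authentication with a certificate that maps to no account. A hedged way to catch the source: raise Schannel event logging and correlate timestamps (registry sketch; 7 = errors + warnings + informational, and a reboot may be needed):

    reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL /v EventLogging /t REG_DWORD /d 7 /f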

How to find source of inherited permission on Exchange online mailbox?

Posted: 04 Dec 2021 10:04 PM PST

Example:

Get-MailboxPermissions -Identity "<user>"  

This shows permissions with IsInherited=True. Where would this permission be inherited from in Exchange Online?

In on-premises Exchange I would use Get-MailboxDatabase and/or Get-ADPermission, but these are unavailable in Exchange Online.

There is a permission we want to remove, but can't because it's inherited:

WARNING: An inherited access control entry has been specified: [Rights: ReadControl, ControlType: Allow]  and was ignored on object "CN=<user>,OU=<organization>,OU=Microsoft Exchange Hosted Organizations,DC=<server>,DC=PROD,DC=OUTLOOK,DC=COM".  

"Couldn't resolve host name: Could not resolve host:" in Zabbix

Posted: 04 Dec 2021 05:01 PM PST

Getting "Couldn't resolve host name: Could not resolve host: example.zabbixagent.com; Name or service not known" in Zabbix server although DNS and hostname of Zabbix Active Agent is correct. Is this a bug or a misconfiguration in Zabbix? Please help.

snmpget error: “No Such Object available on this agent at this OID”

Posted: 04 Dec 2021 09:02 PM PST

I want to create my own MIB. I have been struggling with this for a couple of weeks. I followed this tutorial and am using net-snmp 5.7.3. What I'm doing is:

My setup: I have two VMs, both Ubuntu 16, one the SNMP server with IP 192.168.5.20 and the other the SNMP agent with IP 192.168.5.21. I wrote a MIB, which compiles without any error (this compilation is done only on the agent system, not on the server). I have already done this:

root@snmp-agent:# MIBS=+MAJOR-MIB
root@snmp-agent:# MIBS=+DEPENDENT-MIB
root@snmp-agent:# export MIBS
root@snmp-agent:# MIBS=ALL

My MIB files are in the default search path /usr/share/snmp/mibs. I compiled the MIB and generated the .c and .h files successfully with the command mib2c -c mib2c.int_watch.conf objectName, and then configured net-snmp like this:

root@snmp-agent:# ./configure --with-mib-modules="objectName"
root@snmp-agent:# make
root@snmp-agent:# make install

Everything worked fine. After this, when I run snmptranslate on the agent, I get the output:

root@snmp-agent:# snmptranslate -IR objectName.0
MAJOR-MIB::objectName.0

And with the command snmptranslate -On objectName.0 I get output as:

root@snmp-agent:# snmptranslate -On MAJOR-MIB::objectName.0
.1.3.6.1.4.1.4331.2.1.0

So I'm getting the expected outputs on the agent system. Now my problem is that I don't know how to get the same values from my server!

When I run snmpget, from the server, I get this error:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 MAJOR-MIB::objectName.0
MAJOR-MIB::objectName.0 = No Such Instance currently exists at this OID

Output when specified the OID:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No Such Instance currently exists at this OID

Output when I do these:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 sysDescr.0
SNMPv2-MIB::sysDescr.0 = STRING: Linux snmp-agent 4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017 x86_64

root@snmp-server:# snmpwalk -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No more variables left in this MIB View (It is past the end of the MIB tree)

I have searched and am still searching, but no luck. What should I do? How should I use snmpget from my server on my own MIBs? I mean, something like I do with sysDescr.0 from my server.

I want to do this: snmpget 192.168.5.21 myObjectName.0 and get the values.

EDIT: I have already seen these answers, but they don't work: snmp extend not working and snmp no such object...

UPDATE 2:

When I do snmpwalk on server:

snmp-server:# snmpwalk -v 2c -c ncs -m DISMAN-PING-MIB 192.168.5.21 .1.3.6.1.2.1.80
DISMAN-PING-MIB::pingObjects.0 = INTEGER: 1
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = STRING: "/bin/echo"
DISMAN-PING-MIB::pingMinimumCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.5.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 5
DISMAN-PING-MIB::pingCompliances.6.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.7.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.20.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 4
DISMAN-PING-MIB::pingCompliances.21.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.1.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.3.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 0
DISMAN-PING-MIB::pingMIB.4.1.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48.1 = ""

When I do snmpget with pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48:

root@snmp-server:# snmpget 192.168.5.21 DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = Wrong Type (should be INTEGER): STRING: "/bin/echo"

So where am I going wrong? And what is pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 ? Why such a long OID?

Where am I going wrong? Can anyone point me in the right direction? Any suggestions are greatly appreciated.
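"No Such Object/Instance" while snmptranslate succeeds means the MIB file is fine but the running agent has no handler registered for that OID. A hedged check: make sure the snmpd answering on 192.168.5.21 is the one rebuilt with --with-mib-modules, not the distribution package (a source build of net-snmp installs under /usr/local by default):

    ps aux | grep '[s]nmpd'            # which binary is actually running?
    /usr/local/sbin/snmpd --version    # the rebuilt agent, default install path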

Block linux bridge traffic (only one way) using iptables or ebtables

Posted: 04 Dec 2021 01:09 PM PST

I am using an OpenWrt router. It has a bridge br-lan, and wlan0 and wlan1 are connected to this bridge. eth0 acts as the WAN interface. When a packet comes in from wlan0 or wlan1, it goes through the bridge, gets NATed, and goes out through eth0 to the internet; the reply comes back on eth0, gets un-NATed, and goes to br-lan and then out via wlan0 or wlan1, depending on where the original packet came from.

wlan0/wlan1 --> br-lan --> NAT --> eth0 --> internet

internet --> unNAT --> br-lan --> wlan0/wlan1

Now I have an application listening on the br-lan interface through a raw socket, and I want to do some processing on the packets going from br-lan to wlan0/wlan1. Thus I want to stop/block all packets from br-lan to wlan0/wlan1, as I will be forwarding them to wlan0/wlan1 myself in my application. How do I do that using iptables or ebtables?

I have tried rules like the ones below, but they do not work and all traffic keeps flowing normally:

ebtables -I FORWARD -i br-lan -o wlan1 -j DROP
ebtables -I OUTPUT -o br-lan -j DROP
iptables -I FORWARD -i br-lan -o wlan1 -j DROP
iptables -I OUTPUT -o br-lan -j DROP
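A hedged correction of the rules' addressing: in ebtables, -i/-o match bridge ports (wlan0, wlan1), not the bridge device, which is matched with --logical-in/--logical-out; and iptables sees bridged frames only when br_netfilter is active. Restating the intent (drop everything the bridge would emit on the wireless ports):

    ebtables -I FORWARD --logical-out br-lan -o wlan0 -j DROP
    ebtables -I FORWARD --logical-out br-lan -o wlan1 -j DROP
    # routed traffic entering the bridge (e.g. replies arriving via eth0)
    # traverses the bridge's OUTPUT chain:
    ebtables -I OUTPUT --logical-out br-lan -o wlan0 -j DROP
    ebtables -I OUTPUT --logical-out br-lan -o wlan1 -j DROP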

HTTPS on IIS not working with domain name or IP address

Posted: 04 Dec 2021 08:04 PM PST

Using Windows 2012 R2 Standard server with IIS. Windows firewall has preset rules World Wide Web Services (HTTP Traffic-In) and World Wide Web Services (HTTPS Traffic-In) enabled. The server has one web with the following bindings:

http - empty value / any domain - 80
http - example.com - 80
https - example.com - 443
https - empty value / any domain - 443

Urls tried from external machine:
http://example.com - works
http://my.ip.address - works
https://example.com - not working
https://my.ip.address - not working

Urls tried from local server
http://example.com - works
http://localhost - works
http://my.ip.address - works
https://example.com - not working
https://localhost - works
https://my.ip.address - not working

So HTTP works for all addresses from all locations. HTTPS works only when accessed on the local machine via localhost; it does not work any other way. What am I missing? Do I need to open firewall rules/ports other than 443?
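A hedged check: for HTTPS, HTTP.sys needs a certificate bound to 0.0.0.0:443 (or to the specific IP); if the only effective binding is localhost-only, the symptoms would look exactly like this. Listing the bindings shows what HTTPS requests actually hit:

    netsh http show sslcert

A quick telnet my.ip.address 443 from outside also separates "no listener/firewall" (connect fails) from a TLS-level problem (connect succeeds, handshake fails).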

Why does a RewriteCond %{REQUEST_URI} interfere with a second NOT condition?

Posted: 04 Dec 2021 09:02 PM PST

First, the rule that works:

DirectoryIndex index.php
ErrorDocument 403 /form.html

RewriteCond %{REQUEST_URI} ^/index\.php$
RewriteCond %{REQUEST_METHOD} !POST
RewriteRule . - [F,L]

This means http://example.com and http://example.com/index.php can only be opened through POST.

Now the problem

I added this additional rule set:

RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteRule . - [F,L]

Now, I send a POST again to http://example.com but receive this error:

Forbidden

You don't have permission to access / on this server.
Additionally, a 500 Internal Server Error error was encountered while trying to use an ErrorDocument to handle the request.

This does not make sense, because the rule should NOT catch requests to index.php and send a 403. OK, I extended the second rule set as follows:

RewriteCond %{REQUEST_URI} !^/form\.html$
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteRule . - [F,L]

And sending a POST to http://example.com again returns no 500, but I still receive a 403?!

Update 1
If I remove the first rule set, the second one works alone as expected. This means only http://example.com, http://example.com/index.php and http://example.com/form.html can be accessed.

Update 2
If I use both rule sets and send my POST to http://example.com/index.php I do not receive any errors?!

So the rules interfere only if I send a POST to the root URL. But why?
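A hedged debugging aid rather than an answer: the root request is special because mod_rewrite runs both before and after DirectoryIndex maps / to /index.php, so what REQUEST_URI contains at each pass is the open question. Tracing the engine, and experimentally excluding the bare root, should show which rule actually fires:

    # server config / vhost, Apache 2.4: log every rewrite decision
    LogLevel alert rewrite:trace6

    # experiment: also exempt "/" from the second rule set
    RewriteCond %{REQUEST_URI} !^/$
    RewriteCond %{REQUEST_URI} !^/form\.html$
    RewriteCond %{REQUEST_URI} !^/index\.php$
    RewriteRule . - [F,L]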

Deployment of node js app listening on two separate ports under nginx

Posted: 04 Dec 2021 07:05 PM PST

I have a simple node.js app that listens on two ports: on 8001 it sets up a simple webserver by doing

var express = require('express');
var gHttpApp = express();
gHttpApp.use(express.static('public'));
gHttpApp.listen(8080, function () {
    console.log('HTTP Server listening on *:8001');
});

Then, on 8002 it sets up socket.io

var io = require('socket.io')();
gSocket = io.listen(8002);

In my index.html inside the /public folder, I request the socket.io client js by doing:

<script src="http://localhost:8000/socket.io/socket.io.js"></script>  

while the other js files are requested with relative paths inside /public.

This setup worked while developing locally and seemed logical, but I have no idea how to deploy it on my private server running Ubuntu and nginx, since I cannot reverse proxy the same location to two ports...
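The usual deployment answer is to stop exposing two ports and route by path instead, so the static app and socket.io share one origin behind nginx. A hedged sketch with the ports taken from the question (8001 for HTTP, 8002 for socket.io); the Upgrade headers are required for WebSocket:

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8001;
        }

        location /socket.io/ {
            proxy_pass http://127.0.0.1:8002;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

The page then loads /socket.io/socket.io.js relative to its own origin instead of hard-coding a host and port.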

Install Language Pack On Windows Server Core (2012 R2)

Posted: 04 Dec 2021 06:01 PM PST

I have language packs KB3012997 and KB2839636 staged and approved in Windows Server Update Services 2012 R2, but my Windows Server Core 2012 R2 clients refuse to install it. After googling the issue, it appears that these language pack updates are unable to be installed via WSUS, and have to be manually installed on the clients via the Language Control Panel. Unfortunately the Language Control Panel is not available on the Core edition of Windows server, both control.exe input.dll and control.exe /name Microsoft.Language do not work. I've tried installing the CAB files manually with dism /online /Add-Package /Package-Name:E:\WsusContent\65\F1C5505C26603C0E907DEDD5A4B3A0E6511E44C65.cab but the updates are not registered as being installed in the WSUS console.

How can I go about getting these language packs installed on Server Core 2012 R2? Yes, I know these language packs do little to nothing on Server Core, and I could work around this issue by creating separate groups in the WSUS console for the Core and non-Core editions of Windows Server and approving these updates only for the non-Core editions. But to satisfy my autism I'd like to get these updates installed anyway, because if they really were never intended to target Core editions of Windows Server, I'm assuming the WSUS console wouldn't say my Core servers are applicable for them. Right now the only way I can think of is using a tool like Altiris RapidInstall or Sysinternals Process Monitor to see what file/registry changes are made while adding a language pack on a non-Core edition of Windows Server after it has been installed with dism.exe, and then applying those changes to the Core edition servers.
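To see what DISM itself thinks is installed, independent of the WSUS console's bookkeeping, a hedged pair of checks:

    dism /online /get-packages | findstr /i lang
    dism /online /get-intl

If the package shows as Installed here, the discrepancy is in WSUS reporting rather than on the client.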

Reverse Proxy with Nginx showing default screen

Posted: 04 Dec 2021 03:04 PM PST

I'm trying to set up a reverse proxy to my JIRA instance using nginx.

server {
  listen 80;
  server_name jira.domain.com;
  location / {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:8080;
  }
}

Every time I hit the URL directly, I get the default "Welcome to nginx" page. If I refresh, it then takes me to the JIRA dashboard. I'm having the same issue going to my Confluence box behind nginx. What am I missing to get this to work correctly?
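The default page on the first hit, followed by the correct site after a refresh, often means the first request lands in a different server block (or on a worker still running the old config). A hedged pair of checks: dump the active config to confirm this block is loaded and that no default_server catch-all wins for jira.domain.com, then reload:

    sudo nginx -T | grep -n server_name
    sudo systemctl reload nginx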

C#/asp.net application runs fine on localhost (intranet), but not on server (Internet)

Posted: 04 Dec 2021 04:03 PM PST

I'm a total n00b when it comes to SQL Server admin stuff, so sorry if this is basic. I've designed a website in C#/ASP.NET with a SQL Server backend on my local machine. It runs perfectly when I open the site through VS2010. However, when I publish it to IIS and try to run the site, the ASPX pages work fine but they can't connect to the data. The dropdowns are empty, and anything that deals with data (including logins) doesn't connect to the tables.

I'm assuming this has something to do with permissions? Can anyone help me?

I'm using SQL Server 2008 R2 and Visual Studio 2010, both on a WinXP machine (yeah, I know WinXP isn't built for this, but I just want to test this out before I upload it to a server I have to pay for).
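This pattern is almost always connection-string authentication rather than ASP.NET itself: under Visual Studio the site connects to SQL Server as the logged-in developer, while under IIS it runs as a service account (the local ASPNET account on Windows XP's IIS) that has no SQL Server login. Two hedged options: grant that account access in SQL Server, or switch to SQL authentication, as in this web.config sketch (all names are placeholders):

    <connectionStrings>
      <add name="MyDb"
           connectionString="Server=.\SQLEXPRESS;Database=MyDb;User Id=webuser;Password=secret;" />
    </connectionStrings>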

bash rsync is Killed by signal 2

Posted: 04 Dec 2021 01:09 PM PST

I'm trying to prevent the user from cancelling the script with Ctrl+C. The following script executes completely, except that rsync insists on dying, displaying the error Killed by signal 2.

Is it possible to keep rsync from dying? If so, can I put it in the background, or should it stay in the foreground?

script:

trap '' SIGINT SIGTERM SIGQUIT

cd /tmp
nohup rsync -e 'ssh -o LogLevel=ERROR' -av --timeout=10 --delete-excluded myapp.war myserver:/tmp/ < /dev/null > /tmp/teste 2> /tmp/teste2

let index=0
while [ $index -lt 400000 ]
do
  let index=index+1
done

echo "script finished"
echo "index:$index"

I suspect that the ssh channel is dying before rsync. Here is the end of the strace output for the rsync pid:

[...]
write(4, "\374\17\0\7", 4)              = 4
select(5, NULL, [4], [4], {10, 0})      = 1 (out [4], left {9, 999998})
--- SIGINT (Interrupt) @ 0 (0) ---
--- SIGCHLD (Child exited) @ 0 (0) ---
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 255}], WNOHANG, NULL) = 12738
wait4(-1, 0x7fffaea6a85c, WNOHANG, NULL) = -1 ECHILD (No child processes)
rt_sigreturn(0xffffffffffffffff)        = 0
select(0, NULL, NULL, NULL, {0, 400000}) = 0 (Timeout)
rt_sigaction(SIGUSR1, {SIG_IGN, [], SA_RESTORER, 0x3fcb6326b0}, NULL, 8) = 0
rt_sigaction(SIGUSR2, {SIG_IGN, [], SA_RESTORER, 0x3fcb6326b0}, NULL, 8) = 0
wait4(12738, 0x7fffaea6aa7c, WNOHANG, NULL) = -1 ECHILD (No child processes)
getpid()                                = 12737
kill(12738, SIGUSR1)                    = -1 ESRCH (No such process)
write(2, "rsync error: unexplained error ("..., 72) = 72
write(2, "\n", 1)                       = 1
exit_group(255)                         = ?
Process 12737 detached
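The strace confirms the suspicion: Ctrl-C delivers SIGINT to the whole foreground process group, so rsync's ssh child dies first (exit 255) and rsync follows; the trap only shields the shell itself. A hedged sketch: start the transfer in its own session so the terminal's SIGINT never reaches it:

    setsid rsync -e 'ssh -o LogLevel=ERROR' -av --timeout=10 --delete-excluded \
        myapp.war myserver:/tmp/ < /dev/null > /tmp/teste 2> /tmp/teste2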

SIP INVITE packet has WAN address rather than call manager LAN IP

Posted: 04 Dec 2021 06:01 PM PST

I am using SIP between two subnets (192.168.3.0/24 and 192.168.30.0/24) connected to each other via VPN.

I have a call server on 192.168.3.100, and two phones 192.168.30.118 (Ext. 3128) and 192.168.30.119 (Ext. 3126) on the remote subnet.

The WAN IP on the subnet where the call server is located is 77.0.0.81.

There is an issue with quality of service from the ISP with SIP packets, so instead of the phones communicating over the internet, we wish them to communicate over the site-to-site VPN instead (at no point should SIP and RTP packets leave the VPN).

For the invite packet #10, I can see the following inside the header captured with WireShark (source -> destination):

Source: 192.168.3.100
Destination: 192.168.30.119

INVITE sip:3126@192.168.30.119:5062 SIP/2.0
+ Via: SIP/2.0/UDP 77.0.0.81:5060;branch=z9hG4bK1ddb1569;rport
+ From: <sip:3128@77.0.0.81>;tag=as5c1d47d0
+ To: <sip:3126@192.168.30.119:5062>
+ Contact <sip:3128@77.0.0.81:5060>
+ Call-ID: 132184eda2535423432dde2343243252@77.0.0.81:5060

As far as I understand, once the call has been set up, the Call Manager will hand off the conversation between the phones directly with RTP packets.

When this happens, the RTP packets try to go out from the remote subnet over the WAN (not the VPN) and connect to the address of the WAN router, 77.0.0.81:5060.

What is going on here, and why do the phones not continue to talk to one another over the VPN via the Call Manager (192.168.30.119 -> 192.168.3.100 <- 192.168.30.118), or even directly (192.168.30.119 <-> 192.168.30.118)?

Why is 77.0.0.81 mentioned in the INVITE packet?

No. Time       Source          Destination     Protocol  Length  Info
1   0          192.168.30.119  192.168.3.100   SIP       504     Request: NOTIFY sip:192.168.3.100 |
2   0.219589   192.168.3.100   192.168.30.119  SIP       464     Status: 200 OK |
3   15.006336  192.168.3.100   192.168.30.118  SIP       578     Request: OPTIONS sip:3128@192.168.30.118:5062 |
4   15.041422  192.168.30.118  192.168.3.100   SIP       383     Status: 200 OK |
5   20.043149  192.168.30.118  192.168.3.100   SIP       508     Request: NOTIFY sip:192.168.3.100 |
6   20.263419  192.168.3.100   192.168.30.118  SIP       468     Status: 200 OK |
7   25.212516  192.168.30.118  192.168.3.100   SIP       313     Request: ACK sip:3126@192.168.3.100 |
8   25.299476  192.168.30.118  192.168.3.100   SIP/SDP   1134    Request: INVITE sip:3126@192.168.3.100 |
9   25.522622  192.168.3.100   192.168.30.118  SIP       496     Status: 100 Trying |
10  25.874887  192.168.3.100   192.168.30.119  SIP/SDP   925     Request: INVITE sip:3126@192.168.30.119:5062 |
11  25.876331  192.168.3.100   192.168.30.118  SIP       512     Status: 180 Ringing |
12  25.892092  192.168.30.119  192.168.3.100   SIP       366     Status: 100 Trying |
13  26.01489   192.168.30.119  192.168.3.100   SIP       592     Status: 180 Ringing |
14  26.234984  192.168.3.100   192.168.30.118  SIP       512     Status: 180 Ringing |
15  27.900866  192.168.30.119  192.168.3.100   SIP/SDP   782     Status: 200 OK |
16  28.066616  192.168.30.119  77.0.0.81       RTP       214     "PT=ITU-T G.711 PCMU, SSRC=0x2EB141F2, Seq=7931, Time=0, Mark"
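The 77.0.0.81 in the Via/From/Contact headers means the call server decided this peer is outside its local network and substituted its public address, which is standard NAT handling. The "as..." tag suggests Asterisk; if so, a hedged sip.conf sketch: when externip is set, every subnet reached over the VPN must be declared local, or phones are told to send RTP to the WAN address, exactly as packet 16 shows. Whether the phones then talk via the server or directly is governed separately (directmedia):

    externip = 77.0.0.81
    localnet = 192.168.3.0/255.255.255.0
    localnet = 192.168.30.0/255.255.255.0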

Moving files using sftp

Posted: 04 Dec 2021 08:04 PM PST

I am trying to move files from one location to another on the remote server using sftp below:

for i in a b c d
do
sftp $REMUSR <<EOM>>$OUT 2>&1
rename $SOURDIR/sample_${i}_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].gz $REMDIR
quit
EOM
:
:
done

but i get the message

Couldn't rename file "/source/sample_a_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].gz" to "/destin/": No such file or directory

though the file exists under the /source directory, which I verified:

ls -l sample_a_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].gz
-rw-r--r--  1 prd admin 112 May 23 09:16 sample_a_20140330.gz

Please help.
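sftp's rename takes two literal paths: wildcards are not expanded for it (the pattern is sent to the server verbatim, hence "No such file or directory"), and the target must be a full file name, not a bare directory. A hedged sketch that lists the matches first (ls does glob in sftp) and renames each file explicitly:

    for i in a b c d
    do
        # ask the server which files match the pattern
        files=$(echo "ls -1 $SOURDIR/sample_${i}_*.gz" | sftp -b - $REMUSR | grep -v '^sftp>')
        for f in $files
        do
            echo "rename $f $REMDIR/$(basename $f)" | sftp -b - $REMUSR >> $OUT 2>&1
        done
    done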
