Sunday, May 22, 2022

Recent Questions - Server Fault

Procmail sends an auto reply but doesn't deliver to Inbox

Posted: 22 May 2022 03:29 PM PDT

A few hours ago I set up an auto-response recipe for a certain user.
The auto reply was sent and the original mail was delivered to the Inbox; I tested it several times.
Then I made some minor changes to the formail options, and now the auto reply is sent but the original mail is no longer delivered to the Inbox.
I went back to the first recipe, but the problem remains.
I also tried changing the sender address and checked the spam-marked mails and the maillog, but I cannot work out what happened.

The first recipe:

:0
* ^From.*user@domain.tld
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: me@me.tld
| (formail -rk \
    -A "X-Loop: me@me.tld" \
    -A "Precedence: junk"; \
    echo "Testing";\
    echo "This is an automated response";\
    echo "Not sure to see your message";\
    echo "So please try again tomorrow" ) | $SENDMAIL -t -oi

The changes I had made were formail -rt (instead of -rk) and removing -A "Precedence: junk".
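Procmail's own log usually shows where the message went. A minimal debugging sketch, assuming a user-level rc file; note that a recipe whose action is a pipe is a delivering recipe, so unless it carries the c flag (make a copy and continue), matching mail is consumed there and never reaches the Inbox:

VERBOSE=on
LOGFILE=$HOME/procmail.log
LOGABSTRACT=all
# recipes follow; the log then records which recipe matched and
# where each message was ultimately delivered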

Nginx locations, try_files and headers issue

Posted: 22 May 2022 02:22 PM PDT

So I've replaced Passenger with Puma for a Rails app, and I just noticed that I now have issues with the CDN assets: they now give CORS errors.

Back when I was using Passenger I had the following Nginx config:

server {
    server_name mysite.com;
    root /var/www/mysite.com/public;

    client_max_body_size 4000M;
    passenger_enabled on;
    rails_env production;

    location ~* ^/cdn/ {
        add_header Access-Control-Allow-Origin *;
        expires 364d;
        add_header Pragma public;
        add_header Cache-Control "public";
        break;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    listen 443 ssl; # managed by Certbot
    # the rest of the certbot ssl stuff
}

I then changed the configs to this to make it work with Puma and unix sockets:

upstream puma {
    server unix:///var/www/mysite.com/shared/sockets/puma.sock;
}

server {
    server_name mysite.com;
    root /var/www/mysite.com/public;

    client_max_body_size 4000M;

    location / {
        try_files $uri @app;
    }

    location ~* ^/cdn/ {
        add_header Access-Control-Allow-Origin *;
        expires 364d;
        add_header Pragma public;
        add_header Cache-Control "public";
        break;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }

    listen 443 ssl; # managed by Certbot
    # ssl stuff

    location @app {
        proxy_pass http://puma;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;

        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;

        proxy_redirect off;
    }
}

This works fine, but then I noticed that the CDN URLs were giving 404, so I updated the cdn location to this (I added try_files $uri @app;):

location ~* ^/cdn/ {
    add_header Access-Control-Allow-Origin *;
    expires 364d;
    add_header Pragma public;
    add_header Cache-Control "public";
    try_files $uri @app;
    break;
}

This now works, but I get CORS errors, so it seems the headers are not getting set.

My guess is that try_files ignores what was set before it is called, so I tried setting the proxy header for access control inside the location @app, but I still get the errors.

What's the correct way to go about this?
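(For context: nginx's add_header directives only apply to responses produced by the location that ends up handling the request, and they are inherited from an enclosing level only if the current level defines no add_header of its own. So when try_files re-routes a /cdn/ request to @app, the headers declared in the /cdn/ location never apply. A hedged sketch of one way out, repeating the header in the fallback location:

location @app {
    # Repeated here because responses proxied via the fallback are
    # generated in this location, not in /cdn/.
    add_header Access-Control-Allow-Origin *;

    proxy_pass http://puma;
    # ...existing proxy_set_header lines...
}

Note that proxy_set_header only affects the request sent to the upstream; the response header toward the browser has to come from add_header, or from the app itself.)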

VXLAN L3 over Wireguard L3, with VLAN-VNI Mapping

Posted: 22 May 2022 01:57 PM PDT

Hoping this is the right place - I originally posted on Network Engineering but it got closed and I was pointed to Server Fault.

I am currently attempting to set up an L2 bridge between two sites, using VXLAN to provide the L2 connectivity and Wireguard as transport/L3. I've previously done a Layer 2 bridge like this using GRE over Wireguard and it's been rock-solid, but I'm trying to better understand VXLAN now, and am looking to replace the GRE tunnel with VXLAN.

I've been trying to make use of the info both here and here, but for the life of me I can't get traffic to pass over the non-Wireguard IPs between sites.

I have two Debian machines with bridge-utils installed. They're also running nftables with rules to drop all DHCP traffic, as when I first set up the GRE tunnel I ended up with machines getting assigned IPs from the remote network. Everything else is set to allow, and it's only exposed externally via the Wireguard port.

Host A is set up with:

Wireguard wg0 - 172.30.100.1/24
Bridge br0 - 10.0.0.160/24

Host B is set up with:

Wireguard wg0 - 172.30.100.2/24
Bridge br0 - 10.1.0.160/24

The AllowedIPs in the Wireguard configs covers only the Wireguard subnet 172.30.100.0/24. This worked with the GRE config, and I'd assume it works with VXLAN too, as the VXLAN traffic is encapsulated within the Wireguard tunnel. The hosts can ping and ssh each other on their Wireguard IPs, so that bit is working fine.

The bridges both have port ens18, bridge-vlan-aware yes and bridge-vids 1-4096 in /etc/network/interfaces.

I have a script based on 'Recipe 2' from the first link I posted above, i.e. a single tunnel with multiple VNIs. The script (run from a systemd service that waits until wg0 is up) adds the VXLAN interface vx0 to br0 and then loops to do the VLAN/VNI mapping.

#!/bin/bash

# Gets Wireguard interface IP address.
wgip=`ip a s wg0 | egrep -o 'inet [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | cut -d' ' -f2`

ip link add vx0 type vxlan dstport 4789 external local $wgip dev wg0 # Creates vxlan with wg0 IP as local
# Here is where I may be going wrong but I've tried various combinations ^^^

sleep 1
ip link set dev vx0 master br0         # Adds vxlan to bridge
bridge link set dev vx0 vlan_tunnel on # Enables vlan tunnel on vxlan

# Maps each VLAN to VNIs across the tunnel.
for vlan in 10 20 30; do
        bridge vlan add vid $vlan dev vx0
        bridge vlan add vid $vlan dev ens18
        bridge vlan add dev vx0 vid $vlan tunnel_info id $vlan

        # Think I can remove the below line if I switch to BGP-EVPN for learning later?
        bridge fdb append 00:00:00:00:00:00 dev vx0 vni $vlan dst 10.1.0.160
done

ip link set dev vx0 up

I may be completely on the wrong track here, but if there's anything that looks off in the above, any guidance in the right direction would be greatly appreciated!

(It might even just be down to routing rather than config. The Wireguard config is set to Table=off, which I also did in the GRE/WG setup.)
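A few hedged verification commands that can narrow down where frames stop (interface names as above):

tcpdump -ni wg0 udp port 4789    # VXLAN-encapsulated frames should appear here when br0 traffic flows
bridge fdb show dev vx0          # per-VNI flood/default entries (the 00:00:... appends)
bridge vlan show                 # confirms the VLAN membership on vx0/ens18
bridge -d link show dev vx0      # the vlan_tunnel flag shows up here

One thing that may be worth a second look: the flood entry's dst points at the remote bridge IP (10.1.0.160), while the VXLAN interface is bound to wg0; for the outer packets to ride the tunnel, the destination would normally be the peer's Wireguard address.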

Procmail formail From header

Posted: 22 May 2022 01:46 PM PDT

I have this Procmail recipe to auto-respond to a single user (testing).
ME: me@me.tld
SENDER: user@domain.tld
The header of the auto response has From: my_system_username@host.me.tld.
If the sender is user@gmail.com, the auto reply is bounced with "...this message is 550-5.7.1 likely unsolicited mail".

If I send an email from my client app (Thunderbird) from me@me.tld to the SENDER user@domain.tld, it is not bounced; its header contains From: me@me.tld.

I don't know if that is the problem or if there are other missing headers.
How do I solve that?
I tried adding -A "From: me@me.tld", but it seems it is not possible to add a 'fake' From that way.

:0
* ^From.*user@domain.tld
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: me@me.tld
| (formail -rt \
    -A "X-Loop: me@me.tld"; \
    echo "Testing";\
    echo "This is an automated response";\
    echo "Not sure to see your message";\
    echo "So please try again tomorrow" ) | $SENDMAIL -t -oi
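A hedged sketch of one way to control both headers, assuming formail's documented -I option (replace an existing field rather than append a duplicate, which is what -A does) and sendmail's -f flag for the envelope sender:

:0
* ^From.*user@domain.tld
* !^FROM_DAEMON
* !^FROM_MAILER
* !^X-Loop: me@me.tld
| (formail -rt \
    -I "From: me@me.tld" \
    -A "X-Loop: me@me.tld"; \
    echo "This is an automated response" ) | $SENDMAIL -t -oi -f me@me.tld

Aligning the visible From:, the envelope sender, and a domain whose SPF record covers the sending host is what typically stops rejections like the Gmail 550.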

OpenVPN (on pfSense) behind NAT, not connecting

Posted: 22 May 2022 01:46 PM PDT

I have a pfSense firewall behind a NAT gateway: Huawei router --> pfSense --> LAN network.

I have set up OpenVPN on pfSense with the wizard and forwarded the ports from the Huawei router to the pfSense WAN port for OpenVPN. I can see incoming packets reaching OpenVPN, but somehow nothing going out.

Is there any special configuration required to accomplish this? I want to use pfSense as a remote-access VPN server for remote clients.
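A hedged way to see which leg drops the traffic, assuming the default OpenVPN port 1194/UDP (run from the pfSense shell; interface names are placeholders):

tcpdump -ni <wan_if> udp port 1194    # are handshakes arriving from the Huawei?
tcpdump -ni ovpns1                    # is anything reaching the tunnel interface?

If requests arrive but replies never leave, the usual suspects are the firewall rules on the WAN/OpenVPN interfaces and outbound NAT for the tunnel subnet.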

What program is being executed by the time command?

Posted: 22 May 2022 01:59 PM PDT

In my terminal in Ubuntu, I want to execute the time program with option -v, but it fails:

$ time -v ls
-v: command not found

However, if I specify the program's path /usr/bin/time, it works:

$ /usr/bin/time -v ls
foo
bar
baz
        Command being timed: "ls"
        User time (seconds): 0.00
        ...

So it seems that time and /usr/bin/time are different executables. But to my surprise, when I try to identify time with which, it tells me they are the same:

$ which time
/usr/bin/time

I am puzzled; can someone explain what is happening?
Is the result of which somehow incorrect?
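(For what it's worth: in bash, time is a shell keyword, and which only searches $PATH, so it can only ever report the external binary. A quick sketch:)

$ type -a time
time is a shell keyword
time is /usr/bin/time
$ command time -v ls    # `command` keeps time out of keyword position...
$ \time -v ls           # ...and so does quoting it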

Wrong results after DNS Zone transfer with NS records being updated properly

Posted: 22 May 2022 12:26 PM PDT

I migrated a DNS zone example.org from one provider (foo.tld) to another one (bar.tld) one week ago.

To test the successful transition, I created a new TXT record test.example.org on bar's backend with a non-empty value.

The result is unexpected and strange, no matter which nameserver I use (Google, Cloudflare, my ISP):

  • If I query dig example.org NS, I get the (updated) result of dns1.bar.tld, dns2.bar.tld
  • If I query dig example.org SOA, I get the old result from foo.tld, even though it only has a TTL of 21600, and a week has passed
  • If I query dig test.example.org TXT, no record is found

If I directly query bar.tld's NS servers (e.g. dig example.org SOA @dns1.bar.tld), everything works properly. How can it be that the NS records are valid, but neither the SOA record nor the newly created record is properly found/updated?

I tried to invalidate the Google DNS & Cloudflare caches, but it didn't help.
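A hedged checklist for this pattern; the usual culprit is that the delegation in the parent zone and the NS set inside the zone disagree, or that the old provider still answers authoritatively and resolvers keep using it until the parent's NS TTL (often 1-2 days) expires:

dig +trace example.org SOA                     # follow the delegation from the root down
dig example.org NS @a0.org.afilias-nst.info    # ask a .org parent server directly (server name may vary)
dig test.example.org TXT @dns1.foo.tld         # does the OLD provider still answer authoritatively?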

Wireguard Client to Client issues

Posted: 22 May 2022 02:11 PM PDT

Server: Ubuntu

  • Wireguard server all clients connect to
  • Runs SMB share: all clients can access when the VPN is connected
  • Clients can ping each other

Client a: Windows Server 2022

  • Firewall: Allow 192.168.6.0/24
  • IIS *:80
    • Works locally and from the VPN server (wget); does not work from client b. Client b can access IIS over the server's public IP address, but not the VPN address
  • SQL Server
    • Configured to allow remote connections; client b can't access it over the VPN IP

Client b: Windows 11

  • Can ping client a, can't access IIS, can't access SQL Server

I added the public IP address of client b to the firewall of client a; after that, I can connect to SQL Server over the public IP address, but still not the VPN IP.

Clients all have AllowedIPs = 192.168.6.0/24 in their config

Any advice welcome
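One hedged thing to check, since ICMP works but TCP services don't: on Windows the WireGuard interface often lands in the Public firewall profile, so rules scoped to the Private profile (or to "local subnet") never apply to 192.168.6.0/24 traffic. A PowerShell sketch for client a:

Get-NetConnectionProfile    # which profile did the tunnel interface get?
New-NetFirewallRule -DisplayName "Allow WG subnet" -Direction Inbound -RemoteAddress 192.168.6.0/24 -Action Allow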

Shall I do load balancing in nginx, pm2 or both?

Posted: 22 May 2022 12:21 PM PDT

PM2 can run a Node.js app on multiple instances, i.e. on different cores, load-balancing them behind the same port.

PORT=3000 pm2 start -i NUMBER_OF_CORES app.js    # e.g. -i 2

But I could also do load balancing in Nginx with different ports

upstream app_servers {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name your-domain.com www.your-domain.com;
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://app_servers;
    }
}

and then

pm2 start app.js -f --3000
pm2 start app.js -f --3001
pm2 start app.js -f --3002
pm2 start app.js -f --3003

Which is the best idea (I always assume localhost does all the serving)?

  • simply load balancing the same port on different instances (cores)
  • simply load balancing on different ports and let OS manage instances, or
  • load balancing by having different instances, each with a different port, thus using both Nginx and PM2 load balancers?
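For reference, a hedged sketch of the first option using pm2's built-in cluster mode, which shares a single port across workers and lets Nginx stay a plain reverse proxy:

PORT=3000 pm2 start app.js -i max --name web    # -i max = one worker per core
pm2 reload web                                  # zero-downtime restart of all workers

A common rule of thumb is to let one balancer own the job: pm2 cluster mode when all instances live on one box, Nginx upstreams when they are spread across machines.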

How to clone ESXi USB boot pendrive without shutting down the host

Posted: 22 May 2022 11:59 AM PDT

I have a few ESXi (vSphere) servers running from USB sticks. As USB sticks are not very reliable, I would like to prepare for the failure of those USB devices. Sure, I could just make a configuration backup with vicfg-cfgbackup and, in case of a USB failure, reinstall ESXi on a new stick and restore the configuration. A much better option (in terms of restoration time) is preparing spare USB sticks with ESXi installed in advance, either by doing a clean install on a spare stick and restoring the configuration, or by simply cloning the original USB boot drive. Both methods require temporarily shutting down the ESXi host, which is not an option in my case. So I would like to clone the USB drive using a dd command issued from within the ESXi shell (without any downtime). The problem is that I could not find/identify the device name representing my USB boot device.

Could anyone give me some advice in this regard?
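A hedged sketch for identifying the device from the ESXi shell (the mpx.* name below is just the typical form a USB boot stick takes, not a value to copy):

esxcli storage core device list | grep -B 2 -i usb    # vendor/model hints
ls /dev/disks/                                        # raw nodes, e.g. mpx.vmhba32:C0:T0:L0
dd if=/dev/disks/mpx.vmhba32:C0:T0:L0 of=/vmfs/volumes/datastore1/usb-boot.img

Cloning a live boot device is best-effort, since blocks can change mid-copy; booting the resulting image once in a test machine is a prudent check.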

iLO 3 Video frozen

Posted: 22 May 2022 11:58 AM PDT

So I got an HPE DL360 G7 and updated the iLO 3 firmware to 1.94 (from 1.70). With neither of these versions could I see the remote console: I get the lower half of the screen if I start the remote console while something is displayed, but it does not update when the screen changes. The keyboard works. I have already reset the iLO and the NVRAM, but now I am out of ideas. I have an Advanced license.

Thanks in advance.

Dell PowerEdge R410: Format SAS NetApp drives to 512B sector size for use in RAID

Posted: 22 May 2022 10:52 AM PDT

Thanks for helping out. Some time ago I got two 400GB NetApp drives for use in my PowerEdge server. I tried for a while to get them to work, but they would not join a RAID disk group. I found this is because the drives have been formatted with a 520-byte sector size, and they need to be 512 bytes. I attempted to format the drives, but I cannot access them from the installed operating system (Proxmox) or from a live USB of Mint or Ubuntu.

I'm looking for some way to directly access these drives from some Linux system so I can run the commands needed to reformat the sector size, but if there's a better way to do it I'm open to suggestions.

The only way I can access these drives is through the front hard-drive slots of my server. I have no other device that accepts SAS drives. Unless there's some way inside the server, which there did not appear to be, but I could have glanced over it.

I've tried for a while to get the SAS NetApp drives to work, but to no avail, so I'm hoping that any of you can give me hints or help me out; feel free to ask questions. Thanks again.
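If the disks can ever be presented to Linux as plain SCSI devices, the standard tool for the 520-to-512 conversion is sg_format from sg3_utils. A hedged sketch (device name assumed):

apt install sg3-utils                      # Debian/Ubuntu package name
lsscsi -g                                  # find the drive's /dev/sgN handle
sg_format --format --size=512 /dev/sg3     # low-level format; wipes the drive and can run for hours

The catch on an R410 is that the PERC RAID controller tends to hide drives it cannot use, so a plain SAS HBA (or a controller flashed to IT mode) may be needed before any Linux system can see them at all.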

New install of Debian 11.3.0 freezes after cryptsetup: set up successfully

Posted: 22 May 2022 10:39 AM PDT

I installed Debian 11.3.0. Upon first boot, it appears to crash right after entering the LUKS password. No oops, no kernel panic, no other errors or warnings.

The freeze occurs immediately after this:

cryptsetup: sda3_crypt: set up successfully

No more text. The screen remains on. No blinking cursor. Scroll lock, caps lock and num lock do not work.

There are no terminals available; alt+f2, f3, etc. do not work.

This is a server; there is no X environment that should have attempted to start, and I didn't even install the desktop environment packages. Perhaps they get pulled in anyway and it does indeed attempt to start X?

In any case, is this a common issue? Is there a simple fix for it?
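A hedged first experiment: a hang right after cryptsetup, with a live-looking but frozen console, often points at the framebuffer/KMS handover rather than the crypto itself (though unresponsive lock keys can also mean a genuine kernel hang). At the GRUB menu, press e and append to the linux line:

nomodeset systemd.show_status=true

nomodeset stops the kernel from switching graphics modes, and systemd.show_status keeps boot messages on the console so the real stall point becomes visible.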

bash script – MySQL commands and variables

Posted: 22 May 2022 03:43 PM PDT

I have a bash script that automates a Nextcloud server installation.

To run MySQL commands I use mysql -e:

user@hostname:~$ mysql -e "CREATE DATABASE 'NextcloudDataBaseName'"  

I would like to store the DB name, user name, password, etc. in variables.

How do I write mysql -e commands with variables?

user@hostname:~$ mysql -e "CREATE USER '$UserName'@'localhost' IDENTIFIED BY '$UserPass'"

user@hostname:~$ mysql -e "CREATE USER ${UserName}@'localhost' IDENTIFIED BY ${UserPass}"

The two commands above don't run.

Error message with the user password (a test password):

ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'qwerty1234'  

This syntax seems to work:

user@hostname:~$ mysql -e "CREATE USER '${DbUserName}'@'localhost' IDENTIFIED BY '${DbUserPass}'"  

Single quotes around the variable names; but is this the best syntax, the best practice?
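For what it's worth, a sketch of why that last variant works: the shell expands ${DbUserPass} anywhere inside the double-quoted string, and the single quotes pass through untouched to become the SQL string-literal quotes that CREATE USER requires around the password:

DbUserName=nextcloud     # illustrative values
DbUserPass='qwerty1234'  # the test password from the question
mysql -e "CREATE USER '${DbUserName}'@'localhost' IDENTIFIED BY '${DbUserPass}'"
# mysql receives: CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'qwerty1234'

So yes, this is the usual practice; its one weakness is a value that itself contains a single quote, which would break out of the SQL literal.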

Thanks :-)

Restore Windows XP clonezilla image (Error 0x00007B) [closed]

Posted: 22 May 2022 12:06 PM PDT

Our charity association has updated some management tools to their most recent versions, which are cloud-based. They still want, though, to somehow archive the older version they were using for "historical purposes"; hence this question about old hardware and software platforms.

I have used Clonezilla to copy an old hard drive (IDE) containing a Windows XP system. I restored (correctly, I think, since no error was shown) the image on a VirtualBox machine, and if I start the machine with a Live distribution I can access the restored drive's content.

The disk itself, however, does not boot; rather, it tries to boot but fails with the aforementioned error code (0x0000007B).

From my research, it seems that the system is unable to boot due to a mismatch between the UUID (although I don't know if that term makes sense in this context) of the hard drive on which the system was originally installed and the one belonging to the virtual machine.

But I could also be completely wrong.

I also tried booting from a Windows XP disc, but even the repair console doesn't seem to have any effect (chkdsk /f /r, fixmbr and fixboot show that everything is fine).

Has anyone any idea on how I could fix this issue?
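One hedged avenue: STOP 0x0000007B classically means the installed Windows lacks a driver for the disk controller it boots from, so attaching the restored disk to a VirtualBox IDE controller (the XP-era default) rather than SATA/AHCI is worth a try. A sketch, with "WinXP" as the VM name and the medium path illustrative:

VBoxManage storagectl "WinXP" --name "IDE" --add ide
VBoxManage storageattach "WinXP" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /path/to/restored.vdi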

Create an NFQUEUE rule to match local destination addresses on my Raspberry Pi router

Posted: 22 May 2022 11:21 AM PDT

I'm working on a project to verify the source of each packet whose destination is one of several IPs on the LAN network. I'm interested in the LAN IPs, not the WAN.

I tried to create many matches like the following, but nothing worked:

iptables -t nat -d <list of IPs> -A FORWARD -j NFQUEUE --queue-num 1  

I have used the following rules to enable routing on my Raspberry Pi:

sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t nat -A POSTROUTING -o $eth -j MASQUERADE
sudo iptables -A FORWARD -i $eth -o $wlan -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i $wlan -o $eth -j ACCEPT

The question is where should I put the NFQUEUE rule?
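A hedged sketch of where such a rule can live: the FORWARD chain exists in the filter table, not in nat (which is why the -t nat ... -A FORWARD variant is rejected), and it has to sit in front of the blanket ACCEPT rules or the queue never sees the packets. The destination IPs below are placeholders:

sudo iptables -I FORWARD 1 -d 10.0.0.10 -j NFQUEUE --queue-num 1
sudo iptables -I FORWARD 1 -d 10.0.0.11 -j NFQUEUE --queue-num 1

# or keep the whole list in an ipset:
sudo ipset create lanhosts hash:ip
sudo ipset add lanhosts 10.0.0.10
sudo iptables -I FORWARD 1 -m set --match-set lanhosts dst -j NFQUEUE --queue-num 1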

-EDIT-

I have been told to enable proxy_arp, so that local requests are answered by the Raspberry Pi router. I believe I have to set up the routing tables inside the Raspberry Pi, don't I?

Any thoughts will be appreciated.

In Windows Server Task Manager, GlassFish's User name changed from SYSTEM to the login user's name after some subcommands

Posted: 22 May 2022 12:52 PM PDT

Here's an issue I encountered when running GlassFish 5.0 on Windows Server 2016.

Checking Task Manager -> Details -> User name shows that, at the very beginning, the User name for the GlassFish process is SYSTEM. Then, after the following two operations, the User name changes from SYSTEM to the login user's name.

  1. Stop the domain by asadmin stop-domain.
  2. Start the domain by asadmin start-domain.

With the User name being the login user's name, the domain is stopped if I log out of the Windows Server. However, what I want is to log out without stopping the domain.

Could anyone let me know how I can avoid this situation and why?
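A hedged sketch of the usual fix, registering the domain as a Windows service so it no longer belongs to anyone's logon session (domain name assumed to be the default domain1):

asadmin create-service domain1

create-service is a stock asadmin subcommand; the resulting service can be started from the Services console (or sc start) and survives logoff. Running asadmin start-domain interactively, by contrast, starts the process as the logged-in user, which matches the behaviour described above.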

Tomcat authentication failed - 401 Unauthorized

Posted: 22 May 2022 12:03 PM PDT

I get an error when trying to access the Tomcat management console (Apache Tomcat/8.0.50):

WARNING [http-apr-8080-exec-7] org.apache.catalina.realm.LockOutRealm.filterLockedAccounts An attempt was made to authenticate the locked user "admin"

I changed the configuration file tomcat-users.xml and restarted Tomcat:

<tomcat-users xmlns="http://tomcat.apache.org/xml"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
              version="1.0">
  <user username="admin" password="!w@B@#7XTFj" roles="manager-gui,admin-gui" />

Information about the realm in server.xml:

<Realm className="org.apache.catalina.realm.LockOutRealm">
  <!-- This Realm uses the UserDatabase configured in the global JNDI
       resources under the key "UserDatabase".  Any edits
       that are performed against this UserDatabase are immediately
       available for use by the Realm.  -->
  <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase"/>
</Realm>

and about "Resources"

<GlobalNamingResources>
  <!-- Editable user database that can also be used by
       UserDatabaseRealm to authenticate users
  -->
  <Resource name="UserDatabase" auth="Container"
            type="org.apache.catalina.UserDatabase"
            description="User database that can be updated and saved"
            factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
            pathname="conf/tomcat-users.xml" />

  <Resource name="jdbc/ERS"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@servername:port:BDD_name"
            username="SAU_BIS"
            password="V0t0,z91!"
            maxActive="100000"
            maxIdle="100000" />

</GlobalNamingResources>

I added the log level org.apache.catalina.realm.level = FINE and I get this information:

13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[JMX Proxy interface]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Status interface]' against GET /status --> true
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[HTML Manager interface (for humans)]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Text Manager interface (for scripts)]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[JMX Proxy interface]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Status interface]' against GET /status --> true
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[HTML Manager interface (for humans)]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Text Manager interface (for scripts)]' against GET /status --> false
13-Mar-2020 08:19:08.515 FINE [http-apr-8080-exec-4] org.apache.catalina.realm.RealmBase.hasUserDataPermission   User data constraint has no restrictions
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[JMX Proxy interface]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Status interface]' against GET /status --> true
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[HTML Manager interface (for humans)]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Text Manager interface (for scripts)]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[JMX Proxy interface]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Status interface]' against GET /status --> true
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[HTML Manager interface (for humans)]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.findSecurityConstraints   Checking constraint 'SecurityConstraint[Text Manager interface (for scripts)]' against GET /status --> false
13-Mar-2020 08:19:24.345 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.RealmBase.hasUserDataPermission   User data constraint has no restrictions
13-Mar-2020 08:19:24.361 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.CombinedRealm.authenticate Attempting to authenticate user "admin" with realm "org.apache.catalina.realm.UserDatabaseRealm"
13-Mar-2020 08:19:24.361 FINE [http-apr-8080-exec-5] org.apache.catalina.realm.CombinedRealm.authenticate Failed to authenticate user "admin" with realm "org.apache.catalina.realm.UserDatabaseRealm"

But it's still impossible to access the console.

Any ideas?
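For context, a hedged sketch of the LockOutRealm attributes behind that warning; once the failure count is reached, even the correct password is refused until the lockout expires or Tomcat restarts:

<Realm className="org.apache.catalina.realm.LockOutRealm"
       failureCount="5" lockOutTime="300">
  <!-- failureCount: failed attempts before locking (default 5) -->
  <!-- lockOutTime: lock duration in seconds (default 300) -->
  <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase"/>
</Realm>

The "Failed to authenticate" entries in the FINE log may also simply mean the submitted password does not match what conf/tomcat-users.xml now contains, for instance if Tomcat is running from a different CATALINA_BASE than the file that was edited.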

How to link up (to have the same GAL and see all free/busy data, but not combine) two on-prem Exchange Servers belonging to two different forests?

Posted: 22 May 2022 03:01 PM PDT

We are merging two companies, @abc.com and @xyz.com. The networks are not interconnected at this stage. We want all staff to see one single GAL and to be able to book meetings between the two offices.

Both companies plan to migrate to O365 as separate tenants. Does O365 federation allow us to achieve what I want (i.e. a combined GAL)?

Before moving to O365, can forming an inter-forest trust between the two Active Directories give us a combined GAL?

Any suggestion will be greatly appreciated!

ssh -i: Permission denied (publickey)

Posted: 22 May 2022 01:01 PM PDT

I am trying to SSH as a user on a host.

I've created SSH keys using ssh-keygen for user gitlab on the host, copied the public key id_rsa.pub to ~/.ssh/authorized_keys, and copied the private key id_rsa to my local machine.

When I try to ssh using the following command:

ssh -v -i id_rsa gitlab@67.205.XXX.XXX

I get the following permission denied error:

OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Connecting to 67.205.XXX.XXX [67.205.XXX.XXX] port 22.
debug1: Connection established.
debug1: identity file id_rsa type -1
debug1: identity file id_rsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002
debug1: Authenticating to 67.205.XXX.XXX:22 as 'gitlab'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256@libssh.org
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:ERK9s4ZkT4m95jq0sejB38PNAMGaLdIQB98SNqWQDfg
debug1: Host '67.205.XXX.XXX' is known and matches the ECDSA host key.
debug1: Found key in /Users/angad/.ssh/known_hosts:46
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: Will attempt key: /Users/angad/.ssh/id_rsa RSA SHA256:/xJTv6gRH+xBW9Q+SnwlkRVada4tMESKT+z1LT2zu18 agent
debug1: Will attempt key: id_rsa  explicit
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /Users/angad/.ssh/id_rsa RSA SHA256:/xJTv6gRH+xBW9Q+SnwlkRVada4tMESKT+z1LT2zu18 agent
debug1: Authentications that can continue: publickey
debug1: Trying private key: id_rsa
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
gitlab@67.205.XXX.XXX: Permission denied (publickey).

Not sure how I can debug further.
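A hedged next step on the server side; the most common cause of this exact trace (key offered, immediately refused) is permissions, since sshd silently ignores an authorized_keys file or .ssh directory that is group- or world-writable:

chmod 700 /home/gitlab/.ssh
chmod 600 /home/gitlab/.ssh/authorized_keys
chown -R gitlab:gitlab /home/gitlab/.ssh
tail -f /var/log/auth.log    # sshd logs the precise refusal reason here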

Stuck on 'Enable delegation of user credentials' Windows Server Hyper-v 2012 r2

Posted: 22 May 2022 12:04 PM PDT

I'm trying to set up a Windows Server Hyper-V 2012 R2 server, which I've installed, but I am having issues connecting to it through Hyper-V Manager on Windows 10. I have followed various instructions about WinRM and CredSSP, but I still have the following issue. I launch Hyper-V Manager and click 'Connect to server'. I put in the IP address and set the username and password for the Hyper-V server. When I click 'OK', however, I get an error saying 'This computer is not configured to allow delegation of user credentials'. If I click 'Yes' to supposedly allow it to delegate the credentials, the message pops up again. If I click 'No', I get another error saying 'Could not connect to the Virtual Machine Management service' and 'A computer policy does not allow the delegation of the user credentials to the target computer'. I can access and control it fine with Server Manager. Any thoughts?
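For reference, a hedged sketch of the client-side pieces that usually clear this error (PowerShell as administrator on the Windows 10 machine; "hyperv-host" stands in for the server's name or address):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "hyperv-host" -Concatenate
Enable-WSManCredSSP -Role Client -DelegateComputer "hyperv-host"

plus the matching policy under Computer Configuration > Administrative Templates > System > Credentials Delegation: enable "Allow delegating fresh credentials with NTLM-only server authentication" and add WSMAN/hyperv-host to the server list.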

How to enable email relay in Zimbra in same domain, sent from O365

Posted: 22 May 2022 11:02 AM PDT

So I have to use a shared domain during a migration from Zimbra to O365.

MX points to Zimbra, but the domain is also configured to allow outgoing emails from O365 to the world. I've configured a connector in O365 to Zimbra (it works), and created contacts for the not-yet-migrated users.

The plan is that during the migration, incoming mail arrives at Zimbra and, for migrated users, is redirected to O365 at the onmicrosoft domain.

When I send email from O365 to anywhere except my domain, it works correctly. When I send to anyone within the company who is still on Zimbra, I get either of these errors:

550 5.7.1 ... Relaying denied
553 5.7.1 : Sender address rejected: not logged in

Zimbra clearly blocks my user, as the address already exists on its server, but I don't know where to start to unblock it. I've done similar things with other services before, but most didn't care about that.
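A hedged guess at a starting point: the 553 "Sender address rejected: not logged in" response usually comes from Zimbra's Postfix enforcing sender-login checks for addresses it considers local, so the O365 connector's source has to be treated as trusted. Something along these lines, where the list is extended with Microsoft's published outbound ranges (placeholders below, not values to copy):

zmprov ms `zmhostname` zimbraMtaMyNetworks "127.0.0.0/8 <LAN subnet> <O365 outbound ranges>"
zmmtactl restart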

Exchange 2013 - Remove warning text from outgoing email body

Posted: 22 May 2022 11:02 AM PDT

I created a transport rule in our Exchange Server 2013 that adds a warning text at the top of the email body of all external incoming emails. This is to alert employees to potential risks in external emails that carry website links and attachments which may be harmful. The text is as follows:

CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.  

Now, when a user replies to the email, I want the warning to be removed when Exchange processes the outgoing message. How can I remove the warning text from outgoing emails in Exchange? I was looking for something in the rules, but I could not find any.

Any help will be appreciated. Thanks.

How to configure bootpd on Mac OS X El Capitan (10.11.x)

Posted: 22 May 2022 11:39 AM PDT

I need to run bootpd on El Capitan and configure it to use a different gateway and a modified pool range by editing the file /etc/bootpd.plist.

However, on El Capitan it seems that bootpd, while present, is essentially disabled, and the bootpd.plist file isn't in /etc or anywhere else.

How do I get going with bootpd on 10.11.x?
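A hedged sketch, assuming the stock launchd job macOS ships for this: create /etc/bootpd.plist by hand (the bootpd man page documents the Subnets array, with net_address, net_mask, net_range, and a router key for the gateway), then load the job:

sudo /bin/launchctl load -w /System/Library/LaunchDaemons/bootps.plist
# and to stop serving again:
sudo /bin/launchctl unload -w /System/Library/LaunchDaemons/bootps.plist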

Trouble installing ZSH

Posted: 22 May 2022 12:03 PM PDT

I did the following:

yum install zsh   

Then

chsh eduar
New shell [/bin/bash]: /bin/zsh

When I type:

curl -L http://install.ohmyz.sh | sh     

I got this:

You already have Oh My Zsh installed.  You'll need to
remove /home/eduar/.oh-my-zsh if you want to install

It says that I already have the module installed.
Then, the last step is to reload the resource file:

source ~/.zshrc    

Here I have the following issue:

bash: /home/eduar/.oh-my-zsh/oh-my-zsh.sh: line 26: syntax error near unexpected token `('
bash: /home/eduar/.oh-my-zsh/oh-my-zsh.sh: line 26: `for config_file ($ZSH/lib/*.zsh); do'

If I restart the terminal, it seems like zsh is not working.
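A hedged pointer: the bash: prefix in those errors shows the file is being parsed by bash, and for config_file ($ZSH/lib/*.zsh) is zsh-only syntax, so sourcing ~/.zshrc from a bash session can never work. A quick sketch:

echo $0            # which shell is this terminal actually running?
exec zsh           # switch the current terminal to zsh
source ~/.zshrc    # parses fine under zsh

If new terminals still open in bash, the chsh change may not have taken effect; it only applies to new logins, and can be verified with getent passwd eduar.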

Backup strategy for millions of files in lots of directories

Posted: 22 May 2022 02:05 PM PDT

We have millions of files in lots of directories, for example:

\00\00\00\00.txt
\00\00\00\01.pdf
\00\00\00\02.html
... so on
\05\55\12\31.txt

Backing these up to tape is slow, as backing up data in this format is much slower than backing up a single large file.

"The total number of files on a disk and the relative size of each file impacts backup performance. Fastest backups occur when the disk contains fewer large size files. Slowest backups occur when the disk contains thousands of small files." (Backup Exec Admin Guide)

Would backup performance increase significantly if I created a virtual hard drive, hosted the data on it once mounted, and backed up the VHD instead?

I'm unsure whether the underlying data within the VHD would affect this.

What are the drawbacks of this method?
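In broad terms the speedup is real: the tape job then streams one huge file instead of performing millions of per-file metadata operations. A hedged sketch of the container approach (PowerShell with the Hyper-V module; path and size illustrative):

New-VHD -Path D:\store\files.vhdx -SizeBytes 2TB -Dynamic
Mount-VHD -Path D:\store\files.vhdx
# initialize and format the new disk once, robocopy the tree in,
# then point the backup job at files.vhdx

The drawbacks are the flip side of the same coin: restoring a single file now means restoring and mounting the whole container first, the VHD must be dismounted (or captured via VSS) for a consistent backup, and corruption of one container file puts everything inside it at risk.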

proftpd does not support upload of files bigger than 2Kb

Posted: 22 May 2022 02:05 PM PDT

I just installed Webmin on Ubuntu Server with proftpd. I can now connect to the server and log in with my FTP user name and password, but when I try to transfer a file, small files upload easily while larger files get stuck and time out. Even a 200 KB image doesn't upload.

Here my configuration file for proftpd:

#
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#

# Includes DSO modules
Include /etc/proftpd/modules.conf

# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6             on
# If set on you can experience a longer connection delay in many cases.
IdentLookups            off

ServerName          "Debian"
ServerType          standalone
DeferWelcome            off

MultilineRFC2228        on
DefaultServer           on
ShowSymlinks            on

TimeoutNoTransfer       600
TimeoutStalled          600
TimeoutIdle         1200

DisplayLogin                    welcome.msg
DisplayChdir                .message true
ListOptions                 "-l"

DenyFilter          \*.*/

# Use this to jail all users in their homes
# DefaultRoot           ~

# Users require a valid shell listed in /etc/shells to login.
# Use this directive to release that constrain.
# RequireValidShell     off

# Port 21 is the standard FTP port.
Port                21

# In some cases you have to specify passive ports range to by-pass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a more narrow range.
# PassivePorts                  49152 65534

# If your host was NATted, this option is useful in order to
# allow passive tranfers to work. You have to use your public
# address and opening the passive ports used on your firewall as well.
# MasqueradeAddress     1.2.3.4

# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>

# To prevent DoS attacks, set the maximum number of child processes
# to 30.  If you need to allow more than 30 concurrent connections
# at once, simply increase this value.  Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances            30

# Set the user and group that the server normally runs at.
User                proftpd
Group               nogroup

# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask               022  022
# Normally, we want files to be overwriteable.
AllowOverwrite          on

# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd      off

# This is required to use both PAM-based authentication and local passwords
# AuthOrder         mod_auth_pam.c* mod_auth_unix.c

# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
#
# UseSendFile           off

TransferLog /var/log/proftpd/xferlog
SystemLog   /var/log/proftpd/proftpd.log

# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on

# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime.  If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime

<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>

<IfModule mod_ratio.c>
Ratios off
</IfModule>


# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>

<IfModule mod_ctrls.c>
ControlsEngine        off
ControlsMaxClients    2
ControlsLog           /var/log/proftpd/controls.log
ControlsInterval      5
ControlsSocket        /var/run/proftpd/proftpd.sock
</IfModule>

<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>

#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
#Include /etc/proftpd/sql.conf

#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf

#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.con

# A basic anonymous configuration, no upload directories.

# <Anonymous ~ftp>
#   User                ftp
#   Group               nogroup
#   # We want clients to be able to login with "anonymous" as well as "ftp"
#   UserAlias           anonymous ftp
#   # Cosmetic changes, all files belongs to ftp user
#   DirFakeUser on ftp
#   DirFakeGroup on ftp
#
#   RequireValidShell       off
#
#   # Limit the maximum number of anonymous logins
#   MaxClients          10
#
#   # We want 'welcome.msg' displayed at login, and '.message' displayed
#   # in each newly chdired directory.
#   DisplayLogin            welcome.msg
#   DisplayChdir        .message
#
#   # Limit WRITE everywhere in the anonymous chroot
#   <Directory *>
#     <Limit WRITE>
#       DenyAll
#     </Limit>
#   </Directory>
#
#   # Uncomment this if you're brave.
#   # <Directory incoming>
#   #   # Umask 022 is a good standard umask to prevent new files and dirs
#   #   # (second parm) from being group and world writable.
#   #   Umask               022  022
#   #            <Limit READ WRITE>
#   #            DenyAll
#   #            </Limit>
#   #            <Limit STOR>
#   #            AllowAll
#   #            </Limit>
#   # </Directory>
#
# </Anonymous>

# Include other custom configuration files
Include /etc/proftpd/conf.d/

Here is the log file content:

Aug 01 12:58:43 liXXX-XXX proftpd[6039] liXXX-XXX.members.linode.com (IP[IP]): FTP session opened.
... USER MYUSER: Login successful.
... notice: user MYUSER: aborting transfer: Link to file server lost
... FTP session closed.
...: FTP session opened.
...: USER MYUSER: Login successful.
... notice: user MYUSER: aborting transfer: Link to file server lost
...: FTP session closed.
...: FTP session opened.
...
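A hedged observation: "aborting transfer: Link to file server lost" on anything beyond a tiny file is the classic symptom of blocked passive-mode data connections, and the usual fix is the pair of directives already present (commented out) in the config above, with the passive range also opened on the firewall:

PassivePorts          49152 65534
MasqueradeAddress     <public IP>    # only needed when the server sits behind NAT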

PAM LDAP configuration for non-local user authentication

Posted: 22 May 2022 03:01 PM PDT

I have a requirement to allow non-local user accounts to log in via LDAP authentication. Meaning: the user trying to log in is allowed access if the account exists in the LDAP server's database; there is no need for a local user.

I'm able to achieve this if I run nslcd (/usr/sbin/nslcd).

I would like to know if this can be done with any configuration in /etc/pam.d/sshd or /etc/pam_ldap.conf, without running nslcd.

Please let me know your suggestions.

Thanks, Sravani
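(For reference, a sketch of why nslcd is hard to avoid here: pam_ldap only handles authentication, while sshd must also be able to resolve the account, which is NSS's job and exactly what nslcd or sssd provides. The PAM side alone would look like this in /etc/pam.d/sshd:

auth     sufficient   pam_ldap.so
account  sufficient   pam_ldap.so

but without an LDAP-backed NSS source (passwd: files ldap in /etc/nsswitch.conf), getpwnam() fails before PAM is ever consulted.)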

Can Windows read an unpartitioned NTFS volume? (Single large partition)

Posted: 22 May 2022 12:04 PM PDT

So, for various reasons, I've ended up with a 45TB single Linux logical volume, without a partition table, formatted as NTFS and containing 28TB of data (the filesystem itself is 28TB).

The filesystem was created in Linux and is mountable by Linux. The problem comes when I try to mount it within a KVM-based Windows VM on the same box. Windows does not see a 28TB filesystem, but a 1.8TB disk containing a few randomly sized, unhelpful partitions.

[Disk Management screenshot: Disk 1 shown with a few randomly sized partitions]

I presume this is because Windows is trying to read the first few bytes of the real NTFS filesystem data as a partition table.

I can see a few possible solutions to this problem, but can't work out how to actually execute any of them:

  • Have Windows read an unpartitioned disk (a single volume spanning the device) as a filesystem?
  • Generate a partition table somehow on this logical volume, without destroying the data held within the filesystem itself?
  • Somehow fake a partition table pointing at the LVM volume, and export this to the KVM guest (running in libvirt)? (a sketch of this option follows the parted output below)

The current "partition table" as reported by parted is:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/chandos--dh-data: 48.0TB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  48.0TB  48.0TB  ntfs
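On the third option, a hedged sketch using device-mapper to prepend a small header that carries the partition table, so the NTFS data itself is never rewritten (names and header size illustrative; GPT is required because MBR tops out at 2TiB):

dd if=/dev/zero of=/root/pt-header.img bs=512 count=2048    # 1MiB header device
LOOP=$(losetup --find --show /root/pt-header.img)
LVSZ=$(blockdev --getsz /dev/mapper/chandos--dh-data)       # LV size in 512B sectors
dmsetup create ntfs-disk <<EOF
0 2048 linear $LOOP 0
2048 $LVSZ linear /dev/mapper/chandos--dh-data 0
EOF
parted /dev/mapper/ntfs-disk mklabel gpt mkpart data ntfs 2048s 100%

Two cautions: the backup GPT is written to the last sectors of the combined device, i.e. the tail of the LV, which is only safe because the 28TB filesystem does not reach the end of the 48TB volume; and a dry run on a scratch LV is strongly advised before handing /dev/mapper/ntfs-disk to the guest.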

Squid ignoring hosts file (using SquidMan on Mac)

Posted: 22 May 2022 01:01 PM PDT

I installed SquidMan 3.1 on my Mac, and it works fine. But I really need it to redirect some of the traffic using my hosts file, and it seems to ignore the file no matter where I put it.

So far I've tried:

  • Adding a hosts_file /etc/hosts directive on the configuration template via the SquidMan UI.
  • Adding the same directive on the configuration file located at /usr/local/squid/etc/squid.conf
  • Creating a copy of my hosts file in /usr/local/squid/etc/ and updating the directives to match the new location.

What am I doing wrong?
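A hedged pair of checks, assuming the /usr/local/squid install prefix from the question: make sure the running squid actually loaded the file containing the directive, and that the directive parses:

/usr/local/squid/sbin/squid -k parse -f /usr/local/squid/etc/squid.conf
/usr/local/squid/sbin/squid -k reconfigure

Also note that hosts_file only feeds squid's internal resolver, so a full stop/start of the proxy (rather than a template edit alone) is sometimes needed before new entries are honoured.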
