Saturday, September 4, 2021

Recent Questions - Server Fault

postfix/qmgr: warning: qmgr_active_done_3_generic: remove BCD2761F9C from active: No such file ...rectory

Posted: 04 Sep 2021 07:51 PM PDT

I have a CentOS 7 server with postfix, dovecot, and dovecot-mysql installed. I am receiving the error shown below:

Sep 05 02:17:32 example.com postfix/qmgr[22004]: warning: qmgr_active_done_3_generic: remove BCD2761F9C from active: No such file ...rectory

I do not understand the warning: No such file or directory.
Is this related to PHPMailer, since I am using PHPMailer to send emails?
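
For what it's worth, one way to check whether the queue file in question still exists is to inspect the queue directly; a minimal sketch using standard Postfix tools and the queue ID from the log line above:

    # List the current queue and look for the queue ID
    postqueue -p | grep BCD2761F9C

    # Dump the contents of that queue file, if it still exists
    postcat -q BCD2761F9C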

How to create and execute jenkinsfile when Jenkins and ansible are in separate containers

Posted: 04 Sep 2021 04:28 PM PDT

  • I'm new to Jenkins...
  • My goal is to create a Jenkinsfile that runs a group of ansible-playbook jobs, which together perform the long installation of the product I am testing.
  • I manage my Ansible files locally with VSCode and push them to Git repository.
  • I also have an empty directory for Jenkins files.
  • I built 2 containers in Docker Desktop (Windows): Ansible and Jenkins.
  • Ansible and Jenkins are on the same Network Id.
  • The ping command works from Ansible to Jenkins and vice versa.
  • Both containers have access to Internet.
  • My Ansible container has access to my Ansible directory via its docker-compose file, like this:

        ...
        volumes:
          - ~/product/ansible:/ansible
        ...
  • My Jenkins container has no access to the Jenkins directory.
  • I installed Ansible plugin in Jenkins container through Jenkins site > Manage Jenkins > Manage Plugins.
  • I went to Create a Job > Freestyle project, and under the steps section I filled in the name and the paths for the ansible-playbook and inventories, and that's all.
  • Of course, this pipeline didn't work at all. Not to mention that no file was created in my Jenkins folder.

So, what are the missing parts in my configuration in order for it to work?
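
A minimal declarative Jenkinsfile sketch of the kind of thing described, assuming the job runs on an agent that has ansible-playbook available (for example, by executing against the Ansible container) and that the playbook and inventory live under the /ansible mount from the docker-compose file above; the repository URL and playbook names are placeholders:

    pipeline {
        agent any
        stages {
            stage('Checkout') {
                steps {
                    // Placeholder Git repository holding the Ansible files
                    git url: 'https://example.com/your/ansible-repo.git'
                }
            }
            stage('Run playbooks') {
                steps {
                    // Placeholder playbook/inventory paths under the /ansible volume
                    sh 'ansible-playbook -i /ansible/inventory /ansible/site.yml'
                }
            }
        }
    }

The Jenkinsfile itself normally lives in the Git repository and is picked up via "Pipeline script from SCM", so nothing has to be created by hand in the Jenkins directory.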

What could cause blocked requests in IIS?

Posted: 04 Sep 2021 04:12 PM PDT

I'm using IIS on Windows Server 2016 with MySQL and PHP on two almost identical servers. I've recently noticed a slowdown on one of my two servers but it happens only when my site tries to execute multiple instances of a script at the same time. They seem to get stuck on each other.

A perfect example is my search page. When the user types in a search query, with each keyup (after the second letter) a search is executed as long as there's at least a 200 ms delay since the last keypress. So if you type fast, it only does one search at the end but for slower typers (those who wait more than 200 ms between key presses) this will trigger multiple calls to the search results. See this screenshot.

BAD SERVER [screenshot]

Notice all the pending requests and in this screenshot the first one just finished at 19.08 seconds. Obviously much too long. By the time they're all done they're all well over 15 seconds to return a simple resultset.

BAD SERVER [screenshot]

Keep in mind that these queries take just a fraction of a second when run in MySQL Workbench and also when run on my other server which is not suffering from this problem. See in this screenshot (from the good server) the exact same search returns in a quarter of a second.

GOOD SERVER [screenshot]

It seems to me that (on the bad server) they're not able to execute simultaneously for some reason because if I execute just a single search (by typing quickly enough to trigger only a single search) it comes back quick, but if I execute multiples like this, they all get stuck like in a traffic jam. What could cause this?

This next screenshot shows the result if I trigger only a single search on the bad server. As you can see it comes back super fast. So the problem is only when executing multiples of the same script simultaneously.

BAD SERVER [screenshot]

I did make some changes to the bad server recently but as far as I can remember, the only changes I made were to allow bigger file uploads.

  • In PHP I increased post_max_size = 500M
  • In PHP I increased upload_max_filesize = 500M
  • In IIS I increased UploadReadAheadSize to 49152000
  • In IIS I increased maximum allowed content length to 300000000

It's possible that I made other changes to this server that I can't remember.

TEMPORARY FIX

I can mitigate this problem by allowing a longer delay between key presses when searching, and I've done this, increasing it to 800 ms so that even slow typers don't see the problem. But this is only a band-aid solution and does not address the underlying issue, which also affects other areas of my site.

WHAT I'VE TRIED

So far I've confirmed that my IIS config, MySQL config (my.ini) and my PHP config (php.ini) are all identical in every way that matters on both servers (at least as far as what seems obvious to me). I've also confirmed that the select statements I'm running in this search perform equally well on both servers if I execute them in MySQL Workbench. It's only in my web app where I'm having this problem.

I temporarily undid the two changes I made to IIS for larger file uploads just in case, but that seemed to make no difference.

I've also downloaded and installed LeanSentry which is warning me once or twice a day that my site has seen blocked requests, which I assume is exactly what I'm seeing here, but unfortunately LeanSentry can only pinpoint the source of the problem with ASP pages, not PHP. So it essentially only confirms for me that there's a problem but it can't help me beyond that.

OTHER SYMPTOMS

I see similar problems if I open multiple reports simultaneously. If I allow one report to finish loading before opening the next one they all load quickly, but if I force my app to open multiple reports at once, they all get stuck.

What could be causing this issue of bottlenecking?
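
Not a diagnosis, but for context: one classic cause of concurrent requests to the same PHP script queuing behind one another is the PHP session file lock, which each request holds from session_start() until the script ends. A minimal sketch of releasing the lock early, assuming the search script uses sessions at all (the session key is hypothetical):

    <?php
    session_start();                  // acquires the per-session file lock
    $userId = $_SESSION['user_id'];   // hypothetical: read what the script needs
    session_write_close();            // releases the lock so parallel requests can run

    // ... execute the search query without holding the session lock ...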

Address to be used to access a private IP address for an Azure resource from on-premises

Posted: 04 Sep 2021 02:43 PM PDT

I can see how to give an Azure database, say Cosmos DB, a private IP address with Private Link; that part is fine.

If you want to access that private-linked database from on-premises via a site-to-site VPN, with a tool like Spotfire or Tableau, how do we specify the connection string so that it goes via the ExpressRoute or site-to-site VPN? I cannot find any examples of that or how it occurs.

It must be basic, but I cannot see it.

Update

Looking at this: https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns#on-premises-workloads-using-a-dns-forwarder


I get the impression that the name "azs1l1.database.windows.net" would be the connection string I need?

Unable to ssh using ProxyJump but it works with ssh -J

Posted: 04 Sep 2021 09:06 PM PDT

My question is: How do I set up a bastion host for ssh on AWS using an ubuntu instance?

I can do the following with success:

    root@e183d80cdabc# ssh -J ubuntu@63.33.206.201 ubuntu@10.240.0.20
    Last login: Sat Sep  4 13:14:17 2021 from 10.240.0.30
    ==> SUCCESS! ==> ubuntu@ip-10-240-0-20:~$

But it fails when I try the ~/.ssh/config file approach. Commands used:

    # ssh 10.240.0.20
    # ssh ubuntu@10.240.0.20
    # ssh -i ~/.ssh/id_rsa ubuntu@10.240.0.20

    ssh: connect to host 10.240.0.20 port 22: Connection refused

My ~/.ssh/config looks like this:

    root@e183d80cdabc# cat $HOME/.ssh/config
    Host bastion
      HostName 54.170.186.144
    Host remote
      HostName 10.240.0.20
      ProxyJump bastion

I am running ubuntu on AWS as follows:

    ubuntu@ip-10-240-0-30:~$ cat /etc/os-release
    NAME="Ubuntu"
    VERSION="20.04.2 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.2 LTS"
    VERSION_ID="20.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal

I have tried adding the User ubuntu field but this does not help.

My /etc/ssh/ssh_config on the server looks like this:

    Host *
        ForwardX11Trusted yes
        IdentityFile ~/.ssh/id_rsa
        Port 22
        SendEnv LANG LC_*
        HashKnownHosts yes
        GSSAPIAuthentication yes

UPDATE: I am now using the verbose option, i.e.

    root@e183d80cdabc# ssh -vvv 10.240.0.20
    OpenSSH_8.2p1 Ubuntu-4ubuntu0.3, OpenSSL 1.1.1f  31 Mar 2020
    debug1: Reading configuration data /root/.ssh/config
    debug1: /root/.ssh/config line 2: Applying options for *
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
    debug1: /etc/ssh/ssh_config line 21: Applying options for *
    debug2: resolve_canonicalize: hostname 10.240.0.20 is address
    debug2: ssh_connect_direct
    debug1: Connecting to 10.240.0.20 [10.240.0.20] port 22.
    debug1: connect to address 10.240.0.20 port 22: Connection refused
    ssh: connect to host 10.240.0.20 port 22: Connection refused

It appears not to be using any jump host (i.e. it skips the bastion) and is going directly, and FAILS.

Any ideas greatly appreciated! Thank You

=========================================================

UPDATE: 2021-09-04 15:44 - with SOLUTION. Thanks all; I have marked the answer below.

The correct config does not use HostName for the remote hosts, as the matching is done on Host. I was also able to use a wildcard in the IP address, which is what I was really after.

ssh config

    root@e183d80cdabc# cat $HOME/.ssh/config
    Host bastion
      HostName 63.33.206.201
      User ubuntu
    Host 10.240.0.*
      ProxyJump bastion
      User ubuntu

And voila!

    # ssh 10.240.0.20
    ...
    ubuntu@ip-10-240-0-20:~$
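
As an aside, ssh -G prints the effective configuration for a given destination, which is a convenient way to confirm whether a Host block (and its ProxyJump) actually matches before attempting a connection:

    # Show which user, hostname and jump host would be applied
    ssh -G 10.240.0.20 | grep -Ei 'proxyjump|^user|^hostname'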

Does apache open and close every log on every access?

Posted: 04 Sep 2021 07:36 PM PDT

The question is about the access and error logs, particularly with multiple hosts (apache instances installed on more than one server) and keeping the logs centrally on a network file system.

Does apache close each log file after every write?

If yes, on a busy server hosting many sites, each with its own log, that would seem to be a potential performance bottleneck?

If No, what is the solution when having multiple servers writing to a single logging location on a network file system?
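
For reference, httpd opens its log files at startup and holds them open, reopening them only on restart, so there is no per-request open/close. When several servers need to log away from local disk, one common pattern is piped logs rather than writing directly to a network filesystem; a sketch using Apache's bundled rotatelogs (paths are placeholders):

    # Each write goes through a pipe; rotatelogs manages the files locally
    CustomLog "|/usr/bin/rotatelogs -l /var/log/apache2/example.com-access.%Y%m%d.log 86400" combined

The rotated local files can then be shipped to the central location out of band.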

Mutual authentication (WCF client connecting to SOAP service) fails with one client cert but works with another (but both trusted on server side)

Posted: 04 Sep 2021 02:34 PM PDT

Setup: a .NET (4.6) client application connects to a remote SOAP service over HTTPS. The remote service can be configured to require a client certificate or not.

What I am looking for as an answer is any possible explanation of why scenario #2 fails. The following 3 scenarios were all tested using exactly the same code base, only changing the certificates involved and whether or not a client certificate was required by the service.

Scenario #1 - no client certificate required

  • client connects OK

Scenario #2 - client certificate required, certificate A used

  • certificate A is installed in Windows on client side (local computer store)
  • certificate is valid, 2048 bits, non-wildcard, used successfully for server authentication in another unrelated service, issued by GoDaddy Secure Certificate Authority - G2
  • certificate is shared with the remote party who seem to know what they are doing
  • when client attempts request, handshake fails. On the client side the .NET exception is "The request was aborted: Could not create SSL/TLS secure channel.". On the server side the error is "client failed to present a certificate".

Scenario #3 - client certificate required, certificate B used

  • everything is exactly the same as #2 except a different client certificate is used (B)
  • certificate is valid, 2048 bits, wildcard, used successfully for server authentication in another unrelated service, issued by GeoTrust RSA CA 2018
  • client connects OK

What we can see from logs is that in both scenario #2 and #3, the client and server negotiate to use TLS 1.2.

After running the above multiple times, checking everything, my only conclusion is that certificate A is somehow not compatible with the setup - either the .NET client decides not to present it, or the service cannot accept it. But what could possibly be different/missing?
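
One way to take the .NET stack out of the equation is to attempt the handshake with certificate A directly from the command line; a sketch assuming the cert and key have been exported to PEM files (file names and host are placeholders):

    openssl s_client -connect soap.example.com:443 -cert certA.pem -key certA.key -state

When the server requests a client certificate, s_client also prints the server's "Acceptable client certificate CA names", which shows whether the issuing CA for certificate A is on the server's list at all.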

iRedMail: Domain alias not working with some external mails (diacritics/punycode)

Posted: 04 Sep 2021 07:44 PM PDT

After successfully setting up an iRedMail server for my main domain, I tried to add my secondary domain as an alias by following the steps on here: https://docs.iredmail.org/sql.add.alias.domain.html

This didn't do the trick just yet, so I additionally added the secondary domain into the /etc/postfix/main.cf:

    virtual_alias_domains = domain2.tld
    virtual_alias_maps = hash:/etc/postfix/virtual

Note: I didn't remove any of the existing mysql entries under virtual_alias_maps.

And entered the mapping into /etc/postfix/virtual and executed "postmap /etc/postfix/virtual" afterwards:

@domain2.tld     @domain1.tld  

This is working internally on the server. user1@domain1.tld can send to user2@domain2.tld and user2 will receive the mail in his mailbox. External emails also still arrive when sent to user@domain1.tld.

Unfortunately it doesn't work with external mail to the secondary domain. In my /var/log/mail.log I find the following lines:

postfix/smtpd[5541]: NOQUEUE: reject: RCPT from mail-oi1-x231.google.com[2607:f8b0:4864:20::231]: 451 4.3.5 <user1@domain2.tld>: Recipient address rejected: Server configuration problem; from=<username@gmail.com> to=<user1@domain2.tld> proto=ESMTP helo=<mail-oi1-x231.google.com>  

And:

postfix/smtpd[5644]: warning: problem talking to server 127.0.0.1:12340: Connection timed out  

On port 12340 dovecot is listening:

dovecot    513      root   67u  IPv4  17087      0t0  TCP 127.0.0.1:12340 (LISTEN)  

In my dovecot log I find the following line repeatedly:

dovecot: quota-status: Error: quota-status: Client sent invalid recipient address: Invalid character in path  

After some further testing with different external mail hosters, I realized that 2 out of 4 mails arrived when sent to the secondary domain. GMail and Hotmail didn't, my company's exchange and some other web provider came through.

And that's where I'm stuck. I suspect one of two things: either I simply missed a necessary configuration, which seems highly likely, since I've never set up a mail server on Debian before, or the dovecot error is caused by my secondary domain. The secondary domain contains an umlaut (ä/ö/ü), which I'm well aware can cause some issues. Therefore I also own the domain in its punycode-formatted variant. So, whenever I added my secondary domain with its umlaut to a configuration, I also added the punycode version of it, assuming that would solve any issues in that regard.

iRedMail/postfix/dovecot/whatever else is involved seem to work fine with punycode/umlauts per se; it just seems to depend on the sender, since only half the mails get lost (the sender won't get an error). Any guess as to why, or what logs I could check to dig deeper into this? Did I simply forget to configure something obvious?

Any push in the right direction is highly appreciated.

Regards, Snot

==== Basic Info ====

  • iRedMail version: 1.4.0 MARIADB edition
  • Linux/BSD distribution name and version: Debian GNU/Linux 10 (buster) - 10.10
  • Used DB: MySQL (MariaDB)
  • Web server: Nginx

==== Edit ====

As for the base setup: after a clean Debian 10 installation I followed the steps in this guide: https://www.linuxbabe.com/mail-server/debian-10-buster-iredmail-email-server

Any specific config that differs from the guide has been mentioned in this post. I've additionally issued a certificate which includes the main domain and the secondary domain in punycode.

Here the various logs on boot:

/var/log/mail.log:

    Aug 14 14:24:36 s postfix/postfix-script[1637]: warning: symlink leaves directory: /etc/postfix/./makedefs.out
    Aug 14 14:24:37 s amavis[573]: starting. /usr/sbin/amavisd-new at host.domain1.tld amavisd-new-2.11.0 (20160426), Unicode aware, LC_ALL="C", LANG="en_US.UTF-8"
    Aug 14 14:24:37 s postfix/postfix-script[1819]: starting the Postfix mail system
    Aug 14 14:24:37 s postfix/master[1821]: daemon started -- version 3.4.14, configuration /etc/postfix
    Aug 14 14:24:39 s amavis[1915]: Net::Server: Group Not Defined.  Defaulting to EGID '121 121'
    Aug 14 14:24:39 s amavis[1915]: Net::Server: User Not Defined.  Defaulting to EUID '113'
    Aug 14 14:24:39 s amavis[1915]: No ext program for   .F, tried: unfreeze, freeze -d, melt, fcat
    Aug 14 14:24:39 s amavis[1915]: No ext program for   .zoo, tried: zoo, unzoo
    Aug 14 14:24:39 s amavis[1915]: No decoder for       .F
    Aug 14 14:24:39 s amavis[1915]: No decoder for       .zoo
    Aug 14 14:24:39 s amavis[1915]: Using primary internal av scanner code for clamav-socket
    Aug 14 14:24:39 s amavis[1915]: Found secondary av scanner clamav-clamscan at /usr/bin/clamscan

/var/log/dovecot/dovecot.log:

    Aug 14 14:24:26 s dovecot: master: Dovecot v2.3.4.1 (f79e8e7e4) starting up for pop3, imap, sieve, lmtp (core dumps disabled)
    Aug 14 14:24:43 s dovecot: stats: Error: (stats-reader): didn't reply with a valid VERSION line: EXPORT#011global
    Aug 14 14:24:43 s dovecot: stats: Error: (stats-reader): didn't reply with a valid VERSION line: EXPORT#011global

grep postfix /var/log/syslog:

    Aug 14 14:24:36 s postfix/postfix-script[1637]: warning: symlink leaves directory: /etc/postfix/./makedefs.out
    Aug 14 14:24:37 s postfix/postfix-script[1819]: starting the Postfix mail system
    Aug 14 14:24:37 s postfix/master[1821]: daemon started -- version 3.4.14, configuration /etc/postfix

I've disabled the quota feature and enabled SMTPUTF8 in my postfix main.cf; no notable change, except for an additional line on boot in the mail.log:

Aug 14 14:59:46 s amavis[571]: starting. /usr/sbin/amavisd-new at host.domain1.tld amavisd-new-2.11.0 (20160426), Unicode aware, LC_ALL="C", LANG="en_US.UTF-8"  

The behaviour is unfortunately still the same. After further analyzing the logs I realized that the mails from the providers that come through are apparently sent via punycode (even if I specifically send them to the domain with the umlaut/non-ASCII char). GMail, on the other hand, actually sends the mail to the domain containing the umlaut (non-punycode, even if I specifically use the punycode format in the recipient mail address). So I'll either need to teach my server to handle the non-ASCII chars, teach Google to send via punycode, or teach my server to translate umlauts to punycode. Option 2 is obviously not really an option, so 1 or 3 it is.

mail.log entry from non-GMail hoster mail:

postfix/amavis/smtp[2300]: 4Gn0zh0z4FzLnSJ: to=<user@domain1.tld>, orig_to=<user@domain2InPunycode.tld>, relay=127.0.0.1[127.0.0.1]:10024, delay=4, delays=0.1/0/0.01/3.9, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 4Gn0zm04JHzLxc0)  

mail.log entry from GMail mail:

    Aug 14 15:06:44 s postfix/smtpd[2281]: warning: problem talking to server 127.0.0.1:12340: Connection timed out
    Aug 14 15:06:44 s postfix/smtpd[2281]: NOQUEUE: reject: RCPT from mail-ot1-x32b.google.com[2607:f8b0:4864:20::32b]: 451 4.3.5 <user@dömain2.tld>: Recipient address rejected: Server configuration problem; from=<gmailuser@gmail.com> to=<user@dömain2.tld> proto=ESMTP helo=<mail-ot1-x32b.google.com>
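
As a footnote to option 3: the punycode (ASCII-compatible) form of a name can be generated with GNU idn2 when populating config entries; for example, with the well-known sample label münchen:

    $ idn2 münchen.example
    xn--mnchen-3ya.example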

Issue with Sieve Filters on postfix?

Posted: 04 Sep 2021 08:21 PM PDT

I was wondering if someone could shed some light on the issue I'm having. Currently I have a simple postfix server, and in front of it there is a PMG gateway. Because the PMG gateway has the spam filters, I need to redirect the spam to the users' Junk folder. I have already accomplished this on Zimbra, but on postfix I think I'm missing something. These were the steps I took:

  1. Install the packages and add this at the bottom of main.cf:

    sudo apt-get install dovecot-sieve dovecot-managesieved

    mailbox_command=/usr/lib/dovecot/deliver
  2. Then edit

    /etc/dovecot/conf.d/90-sieve.conf  

and added this line to configure the default location

sieve_default = /etc/dovecot/default.sieve  

then allowed the dovecot group to read the file:

chgrp dovecot /etc/dovecot/conf.d/90-sieve.conf  
  3. Go to the LDA plugin config and uncomment the sieve plugin:

    /etc/dovecot/conf.d/15-lda.conf

    mail_plugins = sieve
  4. Create the sieve file and compile it:

    root@mail:/etc/dovecot# cat /etc/dovecot/default.sieve
    require "fileinto";
    # Filter email based on a subject
    if header :contains "X-Spam-Flag" "YES" {
        fileinto "Junk";
    }

then

    cd /etc/dovecot
    sievec default.sieve

and give dovecot the permissions

chgrp dovecot /etc/dovecot/default.svbin  
  5. Restart postfix and dovecot.

I sent a test spam email from test@gmail.com,

and it is setting the X-Spam-Flag to Yes, but the mail keeps going to the inbox instead of the Junk folder.

I checked the protocols:

    root@mail:/etc/dovecot# doveconf | grep protocols
    protocols = " imap sieve pop3"
    ssl_protocols = !SSLv2 !SSLv3
    Return-Path: <test@gmail.com>
    X-Original-To: sistemas@mydomain.com
    Delivered-To: sistemas@mydomain.com
    Received: from mail.mydomain.com (unknown [192.168.1.248])
        (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))
        (No client certificate requested)
        by mail.mydomain.com (Postfix) with ESMTPS id CB3162033C
        for <sistemas@mydomain.com>; Sun, 25 Jul 2021 10:54:03 -0500 (COT)
    Received: from mail.mydomain.com (localhost.localdomain [127.0.0.1])
        by mail.mydomain.com (Proxmox) with ESMTP id 3DC215C2F3E
        for <sistemas@mydomain.com>; Sun, 25 Jul 2021 10:48:19 -0500 (-05)
    Received-SPF: softfail (gmail.com ... _spf.google.com: Sender is not authorized by default to use 'test@gmail.com' in 'mfrom' identity, however domain is not currently prepared for false failures (mechanism '~all' matched)) receiver=mail.mydomain.com; identity=mailfrom; envelope-from="test@gmail.com"; helo=emkei.cz; client-ip=101.99.94.155
    Authentication-Results: mail.mydomain.com; dmarc=fail (p=none dis=none) header.from=gmail.com
    Authentication-Results: mail.mydomain.com; dkim=none; dkim-atps=neutral
    Received: from emkei.cz (emkei.cz [101.99.94.155])
        (using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))
        (No client certificate requested)
        by mail.mydomain.com (Proxmox) with ESMTPS id 6003D5C0F66
        for <sistemas@mydomain.com>; Sun, 25 Jul 2021 10:48:16 -0500 (-05)
    Received: by emkei.cz (Postfix, from userid 33)
        id B52D62413E; Sun, 25 Jul 2021 17:48:13 +0200 (CEST)
    To: sistemas@mydomain.com
    subject: SPAM: test
    From: "test" <test@gmail.com>
    X-Priority: 3 (Normal)
    Importance: Normal
    Errors-To: test@gmail.com
    Reply-To: test@gmail.com
    Content-Type: text/plain; charset=utf-8
    Message-Id: <20210725154813.B52D62413E@emkei.cz>
    Date: Sun, 25 Jul 2021 17:48:13 +0200 (CEST)
    X-SPAM-LEVEL: Spam detection results:  6
        BAYES_50                  0.8 Bayes spam probability is 40 to 60%
        DKIM_ADSP_CUSTOM_MED    0.001 No valid author signature, adsp_override is CUSTOM_MED
        FORGED_GMAIL_RCVD           1 'From' gmail.com does not match 'Received' headers
        FREEMAIL_FROM           0.001 Sender email is commonly abused enduser mail provider (vhfgyut[at]hotmail.com) (test[at]gmail.com) (test[at]gmail.com) (test[at]gmail.com) (test[at]gmail.com) (test[at]gmail.com)
        NML_ADSP_CUSTOM_MED       0.9 ADSP custom_med hit, and not from a mailing list
        SPF_HELO_PASS          -0.001 SPF: HELO matches SPF record
        SPF_SOFTFAIL            0.665 SPF: sender does not match SPF record (softfail)
        SPOOFED_FREEMAIL        1.224 -
        SPOOF_GMAIL_MID         1.498 From Gmail but it doesn't seem to be...
    X-Spam-Flag: Yes

    test
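
When a compiled script appears to be ignored, Pigeonhole's sieve-test utility can at least confirm that the script itself matches a given message; a sketch run against a saved copy of the test mail (the file name is a placeholder):

    # Prints the actions the script would perform for this message
    sieve-test /etc/dovecot/default.sieve /tmp/testmail.eml

If it reports fileinto "Junk" but delivery still lands in the inbox, the message is likely not passing through deliver/LDA at all.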

nginx config using variable in ssl_certificate path throws permissions error

Posted: 04 Sep 2021 07:02 PM PDT

The nginx configuration server block:

    localhost:/etc/nginx$ cat nginx.conf | grep -B 3 -A 6 '$ssl_server_name'

    server {
        listen 443 ssl http2 default_server;

        ssl_certificate         /etc/letsencrypt/live/$ssl_server_name/fullchain.pem;
        ssl_certificate_key     /etc/letsencrypt/live/$ssl_server_name/privkey.pem;

        location / {
            include /etc/nginx/snippets/set-headers.conf;
            proxy_pass http://localhost:8080;
        }
    }

This is using the variable $ssl_server_name in the certificate directives which is supported since nginx 1.15.9. Relevant part of the nginx docs.

The configuration passes nginx -t and loads without issues, but the page does not load in the browser, and there is a permission denied error opening the cert in error.log, even though the nginx master process is running as root:

    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/19 18:51:47 [error] 5676#5676: *251 cannot load certificate "/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443
    localhost:/etc/nginx$ ps -ef | grep nginx | grep -v grep
    www-data  5676 24653  0 18:49 ?        00:00:00 nginx: worker process
    root     24653     1  0 15:08 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
    localhost:/etc/nginx$ sudo ls -l /etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem
    lrwxrwxrwx 1 root root 56 Apr 17 18:53 /etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem -> ../../archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    localhost:/etc/nginx$ sudo ls -l /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    -rw-r--r-- 1 root root 3591 Apr 17 18:53 /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    localhost:/etc/nginx$ openssl
    OpenSSL> version
    OpenSSL 1.0.2g  1 Mar 2016
    OpenSSL> ^C
    localhost:/etc/nginx$ nginx -v
    nginx version: nginx/1.17.0

When I replace $ssl_server_name with the domain name in the nginx configuration then there is no permissions error reading the very same cert file, and the page loads in the browser.

Why does using the variable in the cert path not work?

UPDATE:

I updated the archive folder group to www-data, but am still seeing the permissions error:

    localhost:/etc/nginx$ sudo chgrp -R www-data /etc/letsencrypt/archive
    localhost:/etc/nginx$ sudo namei -l /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    f: /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    drwxr-xr-x root root     /
    drwxr-xr-x root root     etc
    drwxr-xr-x root root     letsencrypt
    drwx------ root www-data archive
    drwxr-xr-x root www-data [DOMAIN NAME REDACTED]
    -rw-r--r-- root www-data fullchain1.pem
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 07:18:58 [error] 4897#4897: *6 cannot load certificate "/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443

UPDATE 2:

Added group read and execute permissions to the archive folder, still seeing the permissions error:

    localhost:/etc/nginx$ sudo chmod g+r /etc/letsencrypt/archive
    localhost:/etc/nginx$ sudo chmod g+x /etc/letsencrypt/archive
    localhost:/etc/nginx$ sudo namei -l /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    f: /etc/letsencrypt/archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    drwxr-xr-x root root     /
    drwxr-xr-x root root     etc
    drwxr-xr-x root root     letsencrypt
    drwxr-x--- root www-data archive
    drwxr-xr-x root www-data [DOMAIN NAME REDACTED]
    -rw-r--r-- root www-data fullchain1.pem
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 07:39:58 [error] 4897#4897: *22 cannot load certificate "/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443

UPDATE 3:

Tried becoming www-data using sudo but got an error:

    localhost:/etc/nginx$ sudo su - www-data
    No directory, logging in with HOME=/
    This account is currently not available.

Update 4:

I also updated the permissions on the symlinked live folder, still seeing the permissions error:

    localhost:/etc/nginx$ ll /etc/letsencrypt | grep live
    drwx------   5 root root     4096 Apr 17 18:53 live/
    localhost:/etc/nginx$ sudo chgrp www-data /etc/letsencrypt/live
    localhost:/etc/nginx$ sudo chmod g+rx /etc/letsencrypt/live
    localhost:/etc/nginx$ ll /etc/letsencrypt | grep live
    drwxr-x---   5 root www-data 4096 Apr 17 18:53 live/
    localhost:/etc/nginx$ sudo namei -l /etc/letsencrypt/live
    f: /etc/letsencrypt/live
    drwxr-xr-x root root     /
    drwxr-xr-x root root     etc
    drwxr-xr-x root root     letsencrypt
    drwxr-x--- root www-data live
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 07:57:48 [error] 5104#5104: *17 cannot load certificate key "/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/privkey.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/privkey.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443

Update 5:

Listing the permissions of all dirs in path including symlinks:

    localhost:/etc/nginx$ sudo namei -l /etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem
    f: /etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem
    drwxr-xr-x root root     /
    drwxr-xr-x root root     etc
    drwxr-xr-x root root     letsencrypt
    drwxr-x--- root www-data live
    drwxr-xr-x root root     [DOMAIN NAME REDACTED]
    lrwxrwxrwx root root     fullchain.pem -> ../../archive/[DOMAIN NAME REDACTED]/fullchain1.pem
    drwxr-x--- root www-data   ..
    drwxr-xr-x root root       ..
    drwxr-x--- root www-data   archive
    drwxr-xr-x root www-data   [DOMAIN NAME REDACTED]
    -rw-r--r-- root www-data   fullchain1.pem

Update 6:

Tried temporarily changing the shell for the www-data user, became www-data using sudo, and confirmed that reading the cert was possible, but the permission error still happens:

    localhost:/etc/nginx$ cat /etc/passwd | grep www-data
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
    localhost:/$ cat /etc/passwd | grep www-data
    www-data:x:33:33:www-data:/var/www:/bin/bash
    localhost:/etc/nginx$ sudo vim /etc/passwd
    localhost:/etc/nginx$ sudo su - www-data
    No directory, logging in with HOME=/
    localhost:/$ whoami
    www-data
    localhost:/$ cat /etc/letsencrypt/live/[DOMAIN NAME REDACTED]/fullchain.pem
    -----BEGIN CERTIFICATE-----
    [REDACTED CERT]
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    [REDACTED CERT]
    -----END CERTIFICATE-----
    localhost:/$ exit
    logout
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 08:40:23 [error] 5259#5259: *14 cannot load certificate key "/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/privkey.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/[DOMAIN NAME REDACTED]/privkey.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443

Update 7:

Tried exporting the certs to another folder:

    localhost:/etc/nginx$ mkdir /tmp/exported-certs
    localhost:/etc/nginx$ sudo rsync -razL /etc/letsencrypt/live/ /tmp/exported-certs
    localhost:/etc/nginx$ sudo ls -l /tmp/exported-certs/[DOMAIN NAME REDACTED]/fullchain.pem
    -rw-r--r-- 1 root www-data 3591 Apr 17 18:53 /tmp/exported-certs/[DOMAIN NAME REDACTED]/fullchain.pem
    localhost:/etc/nginx$ sudo ls -l /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    -rw------- 1 root www-data 1704 Apr 17 18:53 /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    localhost:/etc/nginx$ sudo namei -l /tmp/exported-certs/[DOMAIN NAME REDACTED]/fullchain.pem
    f: /tmp/exported-certs/[DOMAIN NAME REDACTED]/fullchain.pem
    drwxr-xr-x root root     /
    drwxrwxrwt root root     tmp
    drwxr-x--- root www-data exported-certs
    drwxr-xr-x root root     [DOMAIN NAME REDACTED]
    -rw-r--r-- root www-data fullchain.pem
    localhost:/etc/nginx$ sudo vim nginx.conf
    localhost:/etc/nginx$ cat nginx.conf | grep -B 3 -A 6 '$ssl_server_name'

    server {
        listen 443 ssl http2 default_server;

        ssl_certificate /tmp/exported-certs/$ssl_server_name/fullchain.pem;
        ssl_certificate_key /tmp/exported-certs/$ssl_server_name/privkey.pem;

        location / {
            include /etc/nginx/snippets/set-headers.conf;
            proxy_pass http://localhost:8080;
        }
    }
    localhost:/etc/nginx$ sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    localhost:/etc/nginx$ sudo nginx -s reload
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 10:52:48 [notice] 6250#6250: signal process started
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/error.log
    2019/06/20 10:53:08 [error] 6251#6251: *67 cannot load certificate key "/tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem": BIO_new_file() failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib) while SSL handshaking, client: [IP ADDRESS REDACTED], server: 0.0.0.0:443

Then I decided to check again as the www-data user, because the last time I checked was when the certs were still in the letsencrypt folder; this time I also remembered to check both the cert and the key:

    localhost:/etc/nginx$ cat /etc/passwd | grep www-data
    www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
    localhost:/etc/nginx$ sudo vim /etc/passwd
    localhost:/etc/nginx$ cat /etc/passwd | grep www-data
    www-data:x:33:33:www-data:/var/www:/bin/bash
    localhost:/etc/nginx$ sudo su - www-data
    No directory, logging in with HOME=/
    localhost:/$ cat /tmp/exported-certs/[DOMAIN NAME REDACTED]/fullchain.pem
    -----BEGIN CERTIFICATE-----
    [CERT REDACTED]
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    [CERT REDACTED]
    -----END CERTIFICATE-----
    localhost:/$ cat /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    cat: /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem: Permission denied   <---- THERE IT IS!
    localhost:/$ ls -l /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    -rw------- 1 root www-data 1704 Apr 17 18:53 /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    localhost:/$ exit
    logout
    localhost:/etc/nginx$ sudo chmod g+r /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    localhost:/etc/nginx$ sudo su - www-data
    No directory, logging in with HOME=/
    localhost:/$ cat /tmp/exported-certs/[DOMAIN NAME REDACTED]/privkey.pem
    -----BEGIN PRIVATE KEY-----
    [CERT REDACTED]
    -----END PRIVATE KEY-----
    localhost:/$ exit
    logout
    localhost:/etc/nginx$ sudo tail -n 1 /var/log/nginx/access.log
    139.162.202.226 - [DOMAIN NAME REDACTED]:443 - [20/Jun/2019:11:04:08 +0100] "GET / HTTP/2.0" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 Safari/605.1.15"

Once I added the group read permission for www-data to privkey.pem, the browser was able to load the page. :) This makes sense in hindsight: when variables are used in ssl_certificate, nginx loads the certificate and key during the TLS handshake in the worker process (running as www-data), rather than once at startup in the root-owned master process.

Thanks to all that commented on this question.

Connecting Azure VM to domain with Azure AD DS - Event ID: 4097 "The user name or password is incorrect."

Posted: 04 Sep 2021 08:07 PM PDT

When trying to connect an Azure VM to Azure AD DS, I get the message below, even though I have logged in successfully with the username/password elsewhere, and the account used to connect to the domain is a member of "AAD DC Administrators":

"The user name or password is incorrect."

In Event Viewer under "Windows Log > System" I get the corresponding error message:

"The machine ***** attempted to join the domain *******.onmicrosoft.com but failed. The error code was 1326." (VM and domain removed)

    Event ID: 4097
    NetStatusCode: 1326

Note: When I do an nslookup for *******.onmicrosoft.com on the Azure VM it is able to resolve the DNS.

Any suggestions on what I need to do to join the domain?

Debian - Installation of LSI MegaRAID SNMP AGENT

Posted: 04 Sep 2021 04:08 PM PDT

My OS is:

    Distributor ID: Debian
    Description:    Debian GNU/Linux 8.9 (jessie)
    Release:        8.9
    Codename:       jessie

I succeeded in installing the MegaRAID Storage Manager and I can use StorCli.

    # dpkg --install lib-utils2_1.00-9_all.deb      (without errors)
    # dpkg --install megaraid-storage-manager_17.05.00-3_all.deb      (without errors)

    # ./storcli64 /c0 /vall show
    Controller = 0
    Status = Success
    Description = None

    Virtual Drives :
    ==============

    ---------------------------------------------------------------
    DG/VD TYPE  State Access Consist Cache Cac sCC       Size Name
    ---------------------------------------------------------------
    0/0   RAID1 Optl  RW     Yes     RWBD  -   ON  278.464 GB OS
    1/1   RAID5 Optl  RW     Yes     RWBD  -   ON    8.180 TB DATA
    ---------------------------------------------------------------

Now I would like to install the SNMP agent for my RAID controller. I took the RPM and converted it into a deb with alien:

    # dpkg -i sas-snmp_17.05-3_amd64.deb
    (Reading database ... 54953 files and directories currently installed.)
    Preparing to unpack sas-snmp_17.05-3_amd64.deb ...
    Unpacking sas-snmp (17.05-3) ...
    Setting up sas-snmp (17.05-3) ...
    Starting snmpd
    /etc/lsi_mrdsnmp/sas/install: 182: [: 0: unexpected operator
    [ ok ] Restarting snmpd (via systemctl): snmpd.service.
    Starting LSI SNMP Agent
    /etc/lsi_mrdsnmp/sas/install: 210: [: 0: unexpected operator
    Starting LSI SNMP Agent:
    /etc/init.d/lsi_mrdsnmpd: 153: /etc/init.d/lsi_mrdsnmpd: daemon: not found

I edited /etc/init.d/lsi_mrdsnmpd to resolve the problem with the daemon command by replacing it with:

    ....
    ${agent} -c ${SNMPDCONF}
    #daemon ${agent} -c ${SNMPDCONF}
    ....

Moreover, I added a symbolic link to solve a library problem:

/usr/lib/libsas_objects.so -> /usr/lib64/libsas_objects.so.1  

But now, when I try to start the service:

    # ./lsi_mrdsnmpd start
    Starting LSI SNMP Agent:
    LSI MegaRAID SNMP Agent Ver 3.18.0.5 (Oct 30th, 2012) Started

Nothing shows up in a ps listing. And if I check the syslog, I have:

Oct 16 16:43:45 Server1 MegaRAID SNMP AGENT: Error in getting Shared Memory(lsi_mrdsnmpmain)  

If I try to execute the command manually:

    # ./lsi_mrdsnmpagent -c /etc/snmp/snmpd.conf
    LSI MegaRAID SNMP Agent Ver 3.18.0.5 (Oct 30th, 2012) Started

Same result on the syslog.

I tried to strace the start of the service. Here is the end of the strace:

    16:46:54 fstat(3, {st_dev=makedev(8, 1), st_ino=8128118, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=8, st_size=2945, st_atime=2017/10/16-09:39:01, st_mtime=2017/05/02-08:24:20, st_ctime=2017/05/02-08:24:20}) = 0
    16:46:54 fstat(3, {st_dev=makedev(8, 1), st_ino=8128118, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=8, st_size=2945, st_atime=2017/10/16-09:39:01, st_mtime=2017/05/02-08:24:20, st_ctime=2017/05/02-08:24:20}) = 0
    16:46:54 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f712d916000
    16:46:54 read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\f\0\0\0\f\0\0\0\0\0\..., 4096) = 2945
    16:46:54 lseek(3, -1863, SEEK_CUR)      = 1082
    16:46:54 read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\r\0\0\0\r\0\0\0\0\0\0\0\270\0\0\0\r\0\0\0\37\377\377\377\377k\310J\213\377\377\377\377\221`P\213\377\377\377\377\233Gx\360\377\377\377\377\233\327,p\377\377\377\377\234\274\221p\377\377\377\377\235\300H\360\377\377\377\377\236\211\376p\377\377\377\377\2..."..., 4096) = 1863
    16:46:54 close(3)                       = 0
    16:46:54 munmap(0x7f712d916000, 4096)   = 0
    16:46:54 socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3
    16:46:54 connect(3, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
    16:46:54 sendto(3, "<30>Oct 16 16:46:54 LSI MegaRAID SNMP Agent: Agent Ver 3.18.0.5 (Oct 30th, 2012) Started\n", 89, MSG_NOSIGNAL, NULL, 0) = 89
    16:46:54 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f712d90ba10) = 4715
    16:46:54 exit_group(0)                  = ?
    16:46:54 +++ exited with 0 +++

But now I'm stuck and I don't know how to resolve this problem. Do you have any ideas?

Thanks,

EDIT

Well, today the service is running:

    root     16777     1  0 16:35 ?        00:00:00 /etc/lsi_mrdsnmp/lsi_mrdsnmpagent -c /etc/snmp/snmpd.conf
    root     16778 16777  0 16:35 ?        00:00:00 /etc/lsi_mrdsnmp/lsi_mrdsnmpagent -c /etc/snmp/snmpd.conf

Yay!!!

But... when I try to pass an OID to the agent via lsi_mrdsnmpmain, it returns nothing and the result code is 1:

    # /usr/sbin/lsi_mrdsnmpmain -g .1.3.6.1.4.1.3582.5.1.1.0
    # echo $?
    1

I straced the lsi_mrdsnmpagent service and can see the following each time I run lsi_mrdsnmpmain:

    futex(0x7f61f938e000, FUTEX_WAIT, 0, NULL) = 0
    write(1, "####INSIDE GET#####\n", 20)   = -1 EBADF (Bad file descriptor)
    futex(0x7f61f938e020, FUTEX_WAKE, 1)    = 1
    futex(0x7f61f938e000, FUTEX_WAIT, 0, NULL) = 0
    write(1, "####INSIDE GET#####\n", 20)   = -1 EBADF (Bad file descriptor)
    futex(0x7f61f938e020, FUTEX_WAKE, 1)    = 1
    futex(0x7f61f938e000, FUTEX_WAIT, 0, NULL) = 0
    write(1, "####INSIDE GET#####\n", 20)   = -1 EBADF (Bad file descriptor)
    futex(0x7f61f938e020, FUTEX_WAKE, 1)    = 1
    futex(0x7f61f938e000, FUTEX_WAIT, 0, NULL

And if I strace lsi_mrdsnmpmain I obtain:

    15:41:36 rt_sigaction(SIGRT_1, {0x7fe5257a8a40, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7fe5257b1890}, NULL, 8) = 0
    15:41:36 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
    15:41:36 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
    15:41:36 shmget(0x884b9, 1024, 0600)    = 131072
    15:41:36 shmat(131072, 0, 0)            = 0x7fe525bdc000
    15:41:36 futex(0x7fe525bdc000, FUTEX_WAKE, 1) = 1
    15:41:36 futex(0x7fe525bdc020, FUTEX_WAIT, 0, NULL) = 0
    15:41:36 shmdt(0x7fe525bdc000)          = 0
    15:41:36 exit_group(1)                  = ?
    15:41:36 +++ exited with 1 +++

If you have any ideas about the EBADF (Bad file descriptor) or how to access the MIB...

Thanks !

Using FirstLogonCommands in an Unattend.xml file

Posted: 04 Sep 2021 05:02 PM PDT

I apologize ahead of time for what is probably a stupid question, but I'm having a hard time figuring this out from the Microsoft Documentation (https://docs.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-shell-setup-firstlogoncommands):

If I populate my Unattend.xml file with the 'FirstLogonCommands' setting at the oobeSystem pass, will the commands run once for the first user that logs into the machine, or will the command run once for each user that logs into the machine?
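
For reference, a minimal FirstLogonCommands sketch of the shape the documentation describes (the command itself is a placeholder):

    <FirstLogonCommands>
        <SynchronousCommand wcm:action="add">
            <Order>1</Order>
            <CommandLine>cmd /c echo hello &gt; C:\first-logon.txt</CommandLine>
            <Description>Placeholder first-logon command</Description>
            <RequiresUserInput>false</RequiresUserInput>
        </SynchronousCommand>
    </FirstLogonCommands>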

Set credentials/password for remote connection in powershell on Windows Server 2012

Posted: 04 Sep 2021 06:05 PM PDT

I have 2 servers (both Windows Server 2012 R2). They both have an Administrator account with password xxx and the 2 servers are in the same network (domain). I didn't install/configure those servers.

I'm able to execute powershell commands from server 1:

Invoke-Command -ComputerName server01 -ScriptBlock {Get-Culture}  

I can also use this command

Invoke-Command -ComputerName server01 -Credential Administrator -ScriptBlock {Get-Culture}  

A window pops up and I have to fill in my password. I want only the Credential/password option to be allowed, and only when the connection comes from server02.

How do I achieve this in PowerShell?
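
For the credential part, a sketch using a pre-stored credential object so the prompt only appears once (the server names are the ones from the question):

    # Prompt once, then reuse the credential for subsequent calls
    $cred = Get-Credential Administrator
    Invoke-Command -ComputerName server01 -Credential $cred -ScriptBlock { Get-Culture }

Restricting which source hosts may connect is a WinRM/firewall configuration matter on the target server rather than something the Invoke-Command call itself can enforce.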

nginx check if filename with different extension exists

Posted: 04 Sep 2021 09:09 PM PDT

If a file with ".html" extension doesn't exist I need to know if the same file exists with ".th.html" extension and make a redirect.

Right now on 404 I'm doing a rewrite and if $request_filename exists I do the redirect.

    try_files $uri $uri/ @thengine;

    error_page 404 = @thengine;

    location @thengine {
        rewrite ^/(.*)\.(htm|html)$ /$1.th.html;

        if (-f $request_filename) {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_intercept_errors on;
            proxy_redirect off;

            proxy_pass http://thengine_backend;
        }
    }

I'm wondering if there is a better way to do that without rewrite.

Maybe something like

if ($request_filename ~ (some rule to replace extension)){...}  

Thank you.

Edit: All requests from the browser will come with .html, but in case the file with .html doesn't exist, I have to check whether the same file exists with .th.html, and do the rewrite only in that case.

Edit2: Let's say someone access domain-nginx.com/path/to/index.html

  • nginx must check if file exist, and if it does, show the page
  • if file doesn't exist, look for index.th.html
  • if index.th.html doesn't exist give directly 404
  • if index.th.html DOES exist set some headers and serve domain-app.com/path/to/index.th.html (here is an application that will process these kind of templates)

All this time the user must see only domain-nginx.com/path/to/index.html and must not see any redirect or URL change.

Notice that .th.html is handled by another application
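
A sketch of one variable-free variant, assuming (per the note above) that the .th.html files are served by the backend application, so the existence check for them is delegated to it; the location names reuse the question's @thengine:

    location ~ \.html$ {
        # Serve the .html file if it exists locally; otherwise fall through
        try_files $uri @thengine;
    }

    location @thengine {
        # Internal rewrite (no client-visible redirect), then proxy
        rewrite ^/(.*)\.(htm|html)$ /$1.th.html break;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_intercept_errors on;
        proxy_pass http://thengine_backend;
    }

With proxy_intercept_errors on (plus an error_page 404 directive), a missing .th.html on the backend can still be surfaced as a plain 404 to the user.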

No internet access when toggling `redirect-gateway` in OpenVPN client config

Posted: 04 Sep 2021 08:30 PM PDT

I have a router with IP 192.168.1.1 subnetting 192.168.1.0/24.

On that subnet, a Synology NAS has an IP of 192.168.1.181 and is running a VPN server using subnet 192.168.2.0/24.

When I connect a client to that server from outside both networks, I get assigned 192.168.2.6. From that client I can ping machines on 192.168.1.0/24 (192.168.1.17 & 192.168.1.181 for example) and 192.168.1.1 & 192.168.2.1.

From machines already on 192.168.1.0/24, I can ping the VPN client (192.168.2.6) after adding a static route of route add 192.168.2.0 mask 255.255.255.0 192.168.1.181 (windows).

Before adding the redirect-gateway line to the client config, I would be able to access the internet while on the VPN but was unable to access local web services like a router service or the Synology NAS web service (running within 192.168.1.0/24). I thought this was maybe because the external IP (whatmyip.org) from a VPN client showed the same external address as if I was not connected to the VPN.

After adding the redirect-gateway line to the client config, I verified I had the correct external IP (matches the 192.168.1.0/24 clients external IP) when connected but could not access external sites (google.com) but could access internal web services (192.168.1.1's & 192.168.1.181's).

What am I missing?


Weird observation, not sure why, but the client (192.168.2.6) gets a DHCP & gateway server of 192.168.2.5, which as far as I know isn't anything that exists. I can't ping it. 192.168.2.1 is definitely the VPN server and I can access its web service (192.168.1.181 on 192.168.1.0/24).

Connected client ipconfig /all:

    Description . . . . . . . . . . . : TAP-Windows Adapter V9
    DHCP Enabled. . . . . . . . . . . : Yes
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 192.168.2.6(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.252
    Lease Obtained. . . . . . . . . . : Thursday, August 13, 2015 11:55:43 AM
    Lease Expires . . . . . . . . . . : Friday, August 12, 2016 11:55:42 AM
    Default Gateway . . . . . . . . . : 192.168.2.5
    DHCP Server . . . . . . . . . . . : 192.168.2.5
    DNS Servers . . . . . . . . . . . : 192.168.2.1
    NetBIOS over Tcpip. . . . . . . . : Enabled
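
Not a definitive fix, but a usual suspect for the "redirect-gateway kills internet access" symptom is missing NAT for the tunnel subnet on the VPN host; a sketch assuming a Linux-style server whose LAN interface is eth0 (the Synology UI may expose the equivalent differently):

    # Masquerade VPN client traffic leaving via the LAN so replies return through the NAS
    iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE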

HAProxy TProxy support CentOS 7

Posted: 04 Sep 2021 04:44 PM PDT

PER: Howto Transparent proxying and binding with HAProxy and ALOHA load-balancer

It says the following kernel flags must be set:

  • CONFIG_NETFILTER_TPROXY
  • CONFIG_NETFILTER_XT_TARGET_TPROXY

in /boot/config-<kernel> (3.10.299---something_x86_64) I see:

  • CONFIG_NETFILTER_XT_TARGET_TPROXY

I built the kernel following the steps to add TPROXY support, but the post was for CentOS 6, and I'm still left with only the CONFIG_NETFILTER_XT_TARGET_TPROXY flag set.

Do I have enough for transparent proxying already? Is there a difference in the CONFIG_NETFILTER_TPROXY kernel flag between the CentOS 6 2.x kernels and 3.10.x that I'm missing?

Draytek 2830, Multiple VLANS on Same Port

Posted: 04 Sep 2021 09:09 PM PDT

The Kit

  • Ubiquiti UniFi Long Range Wireless Access Point
  • Cisco SG200-08P Switch (VLAN, PoE support)
  • Draytek 2830VN Router

The Problem

I need to enable multiple VLANs on a single port of the Draytek 2830VN router, as I have two networks set up on the Ubiquiti wireless access point:

  1. SSID#1 10.0.21.1 255.255.255.0 VLAN40
  2. SSID#2 192.168.13.1 255.255.255.0 VLAN10

Usually I do this with a pfSense machine and multiple NICs, but this time around I thought I would use the Draytek to do all the work instead and remove the need for an additional device.

Draytek VLAN Configuration

Draytek VLAN Configuration

Unifi Wireless Networks and VLAN Tags

Unifi VLAN [screenshots]

If I were to guess, I would say it has something to do with VLAN4 through 7, as there are only four physical LAN ports on the router...

Does anyone know how to set this up on the Draytek? Can this be done on the Draytek? I can only seem to get one VLAN allocated to a physical port.

UPDATE

I have managed to get the VLANs working on the Draytek as pictured above; however, the UniFi wireless access point is not obtaining an IP address and is not dishing them out via DHCP. It is flashing green intermittently.

SSL Certificate Not Getting Refreshed

Posted: 04 Sep 2021 03:05 PM PDT

I am trying to change the SSL Certificate for my website. I got the new certificate issued by Comodo and installed it on my web servers. My servers are running IIS7.0.

I also bound the HTTPS protocol for my websites to the new certificate. Then I deleted the old certificate (which was expired) from IIS.

Then I restarted the website, restarted the IIS service from an administrator command prompt, and rebooted the servers.

However, when I try to open my website in a browser, it is still giving me the expired certificate error and showing the information of the older certificate in the certificate info box.

Does anyone have an idea what might be going wrong? Does the new SSL certificate take some time propagating across the DNS?

(My servers are hosted in AWS Cloud as EC2 instances)

Any help or suggestions would be appreciated. Thanks
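
Two checks that may help narrow this down: on the server, netsh shows which certificate thumbprint HTTP.sys actually has bound to port 443, and from any machine with OpenSSL you can see which certificate is really being served (hostname is a placeholder):

    netsh http show sslcert

    openssl s_client -connect www.example.com:443 </dev/null | openssl x509 -noout -subject -dates

For what it's worth, certificates are served directly by the web server and do not propagate through DNS.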

Redirect from Old Site to New Site on different folder

Posted: 04 Sep 2021 05:02 PM PDT

I want to redirect all requests for the host www.hostname1.com (including all subdirectories - www.hostname1.com/...) to a different host, www.newHost.com. I have already made the change in DNS, but I am wondering what changes I should make on the server hosting www.newHost.com so that the redirect takes place with the new URL displayed in the browser.

I have looked at IIS. Under the configuration for www.newHost.com, I can bind www.hostname1.com to the same IP as www.newHost.com, but this works only for the home page of www.hostname1.com and does not rewrite the URL in the browser window.

Please advise on how to make this change.
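
If the IIS URL Rewrite module is available (it is an optional add-on, so this is an assumption), the redirect can be expressed as a host-based rule in the web.config of the site both bindings point at:

    <rewrite>
      <rules>
        <rule name="Redirect old host" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^www\.hostname1\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.newHost.com/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>

A Redirect action returns a 301 to the browser, which is what makes the address bar show the new URL.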

winbind separator and group name behavior in getent group, constantly changing

Posted: 04 Sep 2021 10:04 PM PDT

I have a problem that occasionally appears and disappears, and it drives me nuts.

My Debian servers are authenticated against AD, and only "linuxadmins" group members can SSH to the server and "sudo su".

SSH login works, no problems there, but users are getting "user xyz is not in sudoers" errors while using sudo.

My /etc/sudoers contains the AD group name:

%linuxadmins ALL =(ALL) ALL  

And my Samba conf:

    #GLOBAL PARAMETERS
    [global]
       workgroup = RKAS
       realm = RKAS.RK
       preferred master = no
       server string = SEP DEV Server
       security = ADS
       encrypt passwords = true
       log level = 3
       log file = /var/log/samba/%m
       max log size = 50
       printcap name = cups
       printing = cups
       winbind enum users = Yes
       winbind enum groups = Yes
       winbind use default domain = Yes
       winbind nested groups = Yes
       #winbind separator = +
       #idmap uid = 600-20000
       #idmap gid = 600-20000
       ;template primary group = "Domain Users"
       template shell = /bin/bash
       template homedir = /home/%D/%U
       winbind offline logon = yes
       winbind refresh tickets = yes

The problem lies in the group separator that Samba handles.

getent group | grep linuxadmins  

gives back two different results within a few minutes:

linuxadmins:x:784:xyz  

or

\linuxadmins:x:784:xyz  

Users are only able to sudo if there's no backslash.

What's wrong? I cannot understand why it is constantly adding and removing the backslash in the group names.

common-account:

    account [success=2 new_authtok_reqd=done default=ignore]        pam_unix.so
    account [success=1 new_authtok_reqd=done default=ignore]        pam_winbind.so
    account required                        pam_permit.so

common-auth:

    auth    [success=2 default=ignore]      pam_unix.so nullok_secure
    auth    [success=1 default=ignore]      pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login require_membership_of=linuxadmins try_first_pass
    auth    required                        pam_permit.so

and no common-system, only session

session     required    pam_mkhomedir.so umask=0022 skel=/etc/skel  

I must add that this behavior is happening across all Linux servers.

IIS 7 and ASP.NET State Service Configuration

Posted: 04 Sep 2021 04:08 PM PDT

We have 2 web servers load balanced and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions but we don't have the resources for that.

I've followed these instructions to no avail. The session still isn't being shared between the two servers.

I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue?

Update:

I turned on the session state service on my local machine and pointed both servers to my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server for my state service. Perhaps the problem is that I am using the IP address 127.0.0.1 on one server and a different IP address on the other server. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't work from the host server. Any insight on whether my suspicions are correct would be appreciated.
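
For reference, a sketch of the two pieces involved under this setup (the state-server host name is a placeholder). By default the ASP.NET state service only accepts connections from the local machine, which matches the symptom above and is controlled by a registry value on the box running the service:

    <!-- web.config on both web servers -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver01:42424"
                  timeout="20" />

    REM On the state-service machine: allow remote connections, then restart the service
    reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters /v AllowRemoteConnection /t REG_DWORD /d 1 /f
    net stop aspnet_state && net start aspnet_state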

IIS: access denied to Web.Config file

Posted: 04 Sep 2021 06:05 PM PDT

I'm trying to set up a new website on a Windows Server 2003 machine. On this server there is already a classic ASP website on port 80. I'm configuring the new one (ASP.NET 3.5) on port 82, currently running under .NET Framework 4.0, as I keep getting an error when trying to install 3.5.

When accessing the website I get an error saying access to the web.config file is denied; a test HTML file loads fine.

I also tried adding an impersonation clause to web.config for the machine's admin user, with no success.

The folder and files have the correct permissions for IUSR_SERVERNAME, and the web server extensions (the .NET Framework ones) are enabled and have permissions as well. The ASP.NET user does not exist on this machine (I read somewhere that you also need to give access to this user), so I don't know what else to try.
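
As a starting point for the permissions angle: on Server 2003 the ASP.NET worker process normally runs as NETWORK SERVICE (under IIS 6) rather than the old ASPNET account, so granting read access to both it and the anonymous user covers the common cases. A hedged sketch — the path and server name are placeholders:

    :: grant read access on the site folder to the anonymous and worker accounts
    cacls "C:\Inetpub\newsite" /T /E /G IUSR_SERVERNAME:R
    cacls "C:\Inetpub\newsite" /T /E /G "NETWORK SERVICE":R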

Help please. Thank you

Changing a Set-Cookie header using mod_rewrite/mod_proxy

Posted: 04 Sep 2021 02:00 PM PDT

I have a bunch of CGI scripts, which are served over HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the 'Secure' attribute, so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not passed on via HTTP (after all, that is what the attribute is for). I need, however, an exception to this rule.

Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, so that the cookie can be passed back to the user over the insecure HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need it as a temporary workaround.

I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a cookie header. What I have not yet found is how to extract the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that?
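
For reference, header rewriting on proxied responses is usually done with mod_headers rather than mod_rewrite: its Header edit directive (Apache 2.2.4 and later) applies a regex to an existing response header. A minimal sketch, with the backend URL as a placeholder:

    # strip the Secure attribute from Set-Cookie on the proxied response
    <Location /legacy>
        ProxyPass https://intranet.example.com/cgi-bin/script
        ProxyPassReverse https://intranet.example.com/cgi-bin/script
        Header edit Set-Cookie "(?i);\s*Secure" ""
    </Location>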

SBS 2008 Backup Drive Full - Error Code '2147942512'

Posted: 04 Sep 2021 02:00 PM PDT

We are using Windows Backup on SBS 2008 SP2, backing up to 1 TB external hard drives. Recently, after switching drives, our backup started failing because the backup drive is full and auto-delete isn't removing older backups/shadow copies. I'm trying to gather more information to help me prevent this problem from recurring.

How I can tell that the drive is getting full:
In the event viewer under Windows Logs > Application, I'm seeing Event ID 517 but it fails to show an intelligible description. However, under Applications and Services Logs > Microsoft > Windows > Backup > Operational, I'm seeing an event with the ID of 5 and a description like this: Backup started at '10/4/2011 12:30:12 PM' failed with following error code '2147942512'.

One of the most informative posts I've found on this error is located on Microsoft's Technet Forums here. In that post, a Microsoft representative gives this hazy explanation:

auto-delete feature to ensure that at least some old backup copies are maintained on the disk -- does not automatically delete backups if space utilization by older copies is less than 1/8 of the disk size or in other words, 13% of the disk size. that means if the one full backup copy does not fit in the 7/8 of the disk size, backup may fail with disk full error. auto-delete will not automatically delete older versions to reclaim more older versions of backup.

In the above explanation, I do not understand what is meant by "older copies" except that it appears that anything older than the very last shadow copy would be considered "older copies". I'm going to make the assumption that this problem where auto-delete will not work will affect any hard drive that is large enough to make an effective backup drive, or in other words, any hard drive that is large enough to hold more than one backup/shadow copy at once.

The same MS representative proposes the solution of using a larger backup drive. I can't understand how this will help. It appears to me it will simply delay the problem until a later date.

In order to resolve this problem for now, I did the following:

  1. Assign the backup drive a disk letter under disk management.
  2. Run the command line with Administrative rights.
  3. diskshadow.exe [enter]
  4. delete shadows oldest x: [enter] (where X: is the letter you assigned your backup drive)
  5. I manually ran the above command some 60 or 80 times to free up about 200 GB of space on my 1 terabyte external hard drive.

However, I do not feel this is a satisfactory solution to prevent the problem from happening again in the future. Does anyone have a solution to prevent your Windows Server backup drive from getting full?
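
Short of a larger redesign, one stopgap is to automate step 4 so the oldest shadow copy is pruned on a schedule before the drive fills. A hypothetical sketch using diskshadow's script mode (X: is a placeholder for the backup drive letter):

    :: prune-oldest.cmd -- delete the oldest shadow copy on the backup drive;
    :: could be run from Task Scheduler ahead of each backup window.
    echo delete shadows oldest X: > "%TEMP%\prune.dsh"
    diskshadow /s "%TEMP%\prune.dsh"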

Lighttpd proxy module - use with hostname

Posted: 04 Sep 2021 07:02 PM PDT

I have to proxy a site which is hosted on an external webspace through my lighty on example.org. My config so far:

$HTTP["url"] =~ "^/webmail" {      proxy.server =  ("/webmail/" => (          # this entry should link to example2.org          ("host" => "1.2.3.4", "port" => 80)      ))  }  

The webspace provider has configured my domain as a vhost. So if I access http://1.2.3.4/webmail/, lighttpd only delivers the provider's main site, which says "Site example.org was not found on our server."

Any suggestions on how to configure lighty to proxy sites that are only hosted as vhosts (and do not have an IP of their own)?
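
In case it helps: the provider's server selects its vhost from the Host: header, which lighttpd forwards unchanged as example.org. Newer lighttpd releases (1.4.46+) added a proxy.header option for rewriting that header; the sketch below is an assumption based on the mod_proxy docs, with example2.org standing in for the hosted domain — check the exact key names against your version:

    $HTTP["url"] =~ "^/webmail" {
        proxy.server = ( "/webmail/" => (
            ( "host" => "1.2.3.4", "port" => 80 )
        ))
        # map any incoming Host to the vhost name the provider expects
        proxy.header = ( "map-host-request" => ( "-" => "example2.org" ) )
    }

As far as I know, on older 1.4.x builds without proxy.header, mod_proxy alone cannot rewrite the Host header.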

IIS no longer saving session variables

Posted: 04 Sep 2021 03:05 PM PDT

I'm running IIS v7 on a Win7 development machine. I have PHP code that saves session variables and calls them back later. This has been working on this machine for some time.

For some reason, the session variables now disappear immediately after saving. Code that used to work fine on http://localhost/ suddenly does not.

I have tested different browsers - the vars disappear regardless of browser.

I have tested identical code on different servers. The problem exists only on this development machine.

I tried some code that saves a session var, reads it back, and displays it, then shows a link to click to read and display it again. The session var DOES get written, read back, and displayed OK, but when you click the link to view it again, it's gone.
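
A minimal repro of that behavior, in case anyone wants to test along (variable names are illustrative):

    <?php
    // First request saves a session var and prints a link; following the
    // link re-reads it. "GONE" on the second request means the session
    // data is not surviving between requests.
    session_start();
    if (!isset($_GET['check'])) {
        $_SESSION['probe'] = 'hello';
        echo 'Saved: ' . $_SESSION['probe'] . ' <a href="?check=1">read it back</a>';
    } else {
        echo isset($_SESSION['probe']) ? $_SESSION['probe'] : 'GONE';
    }

Given the clean-up tools that were run, one common culprit for exactly this symptom is session.save_path pointing at a folder the IIS worker process can no longer write to — echo session_save_path(); shows where PHP is trying to store the files.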

I don't recall making any changes to IIS. But I did run several malware scanners and clean-up tools.

Is anyone aware of any setting in IIS that disallows session vars? Any other thoughts?

Can I bind a (large) block of addresses to an interface?

Posted: 04 Sep 2021 04:30 PM PDT

I know that the ip tool lets you bind multiple addresses to an interface (eg, http://www.linuxplanet.com/linuxplanet/tutorials/6553/1/). Right now, though, I'm trying to build something on top of IPv6, and it would be really useful to have an entire block of addresses (say, a /64) available, so that programs could pick any address from the range and bind to that. Needless to say, attaching every IP from this range to an interface would take a while.

Does Linux support binding a whole block of addresses to an interface?
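
For what it's worth, Linux does have a mechanism for this: the "AnyIP" trick of adding a local route for the whole prefix, after which any address in the block can be bound without attaching each one to an interface. A minimal sketch, using the documentation prefix 2001:db8::/64 as a placeholder:

    # route the whole /64 to the local machine; programs can then bind()
    # any address inside it (run as root; substitute your real prefix)
    ip -6 route add local 2001:db8::/64 dev lo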

Cross domain javascript form filling, reverse proxy

Posted: 04 Sep 2021 10:04 PM PDT

I need a javascript form filler that can bypass the 'same origin policy' most modern browsers implement.

I made a script that opens the desired website/form in a new window. Using the handle returned by the window.open method, I want to retrieve the inputs with theWindowHandler.document.getElementById('inputx') and fill them, but I get "access denied".

Is it possible to solve this problem by using Isapi Rewrite (official site) in IIS 6 acting like a reverse proxy? If so, how would I configure the reverse proxy?

This is how far I got:

RewriteEngine on
RewriteLogLevel 9
LogLevel debug

RewriteRule CarChecker https://the.actualcarchecker.com/CheckCar.aspx$1 [NC,P]

The rewrite works (http://ourcompany.com/ourapplication/CarChecker), as is evident in the logging. From within our company site I can run the car checker as if it were in our own domain. Except the 'same origin policy' is still in force.
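
One thing worth checking, assuming the proxying itself is sound: the popup must be opened via the proxied same-origin path rather than the remote URL, or the browser will still treat the two windows as different origins. A hedged sketch — the path and input id are illustrative:

    // open the proxied (same-origin) path, then poll until the form exists
    var w = window.open('/ourapplication/CarChecker');
    var timer = setInterval(function () {
        try {
            var input = w.document.getElementById('inputx'); // hypothetical id
            if (input) {
                input.value = 'ABC-123';
                clearInterval(timer);
            }
        } catch (e) { /* document not ready yet */ }
    }, 100);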

Regards,

Michel
