Sunday, June 27, 2021

Recent Questions - Server Fault


Auto Virtual Host - Single Nginx config

Posted: 27 Jun 2021 09:24 PM PDT

I am trying to create a single Nginx config for multiple hosts based on a directory. I followed a guide that works well with a standard HTTP setup, but when I add the HTTPS 301 redirect I get an "invalid redirect" error. Any ideas? My config is below. Thanks.

server {
    listen x.x.x.x:80;
    server_name ~^(?<sname>.+?).domain.com$;
    return 301 https://$server_name$request_uri;
}

server {
    listen x.x.x.x:443 ssl default_server;
    server_name ~^(?<sname>.+?).domain.com$;
    root /var/web/$sname;
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    ssl_certificate /etc/letsencrypt/live/wildcard.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/wildcard.domain.com/privkey.pem;

    access_log /var/log/nginx/$sname-access.log;
    error_log  /var/log/nginx/wildcard-error.log debug;

    error_page 404 /index.php;

    sendfile off;

    location ~ \.php {
        include fastcgi.conf;
        #fastcgi_index index.php;
        include cors_support;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known {
        root /var/www/html;
    }
}
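One thing worth checking (per nginx's variable documentation): when `server_name` is a regex, `$server_name` expands to the regex text itself, which is not a valid hostname for a redirect target. A minimal sketch of the HTTP block using `$host` instead, which is the name the client actually requested (regex dots escaped here for correctness):

```nginx
server {
    listen x.x.x.x:80;
    server_name ~^(?<sname>.+?)\.domain\.com$;
    # $host carries the requested hostname, so the redirect target stays valid
    return 301 https://$host$request_uri;
}
```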

SpamAssassin not sending mail

Posted: 27 Jun 2021 09:21 PM PDT

We have integrated Postfix with SpamAssassin on CentOS 8, using the following in /etc/postfix/master.cf:

smtp       inet  n       -       n       -       -       smtpd
  -o content_filter=spamfilter
smtps      inet  n       -       n       -       -       smtpd
  -o content_filter=spamfilter
submission inet  n       -       n       -       -       smtpd
  -o content_filter=spamfilter
spamfilter unix  -       n       n       -       -       pipe
  flags=Rq user=spamd argv=/usr/bin/spamfilter.sh -oi -f ${sender} ${recipient}

/usr/bin/spamfilter.sh

#!/bin/bash
# Simple filter to plug SpamAssassin into the Postfix MTA
#
# This script should probably live at /usr/bin/spamfilter.sh
# ... and have 'chown root:root' and 'chmod 755' applied to it.
#
# For use with:
#     Postfix 20010228 or later
#     SpamAssassin 2.42 or later
#
# Note: Modify the file locations to suit your particular
#       server and installation of SpamAssassin.
# File locations:
# (CHANGE AS REQUIRED TO SUIT YOUR SERVER)

LOGFILE=/var/sentora/spamd/SpamFilterChecking.log
SPAMASSASSIN=/usr/bin/spamc
SENDER_MAIL_ID=$3
BLOCKED_MAIL_LIST_FILE=/var/sentora/spamd/BlockedMails.txt
SENDMAIL=/usr/sbin/sendmail

echo $SENDER_MAIL_ID >> $LOGFILE
FromDomainName=`cut -d '@' -f2 <<< $SENDER_MAIL_ID`
echo "FromDomainName : $FromDomainName" >> $LOGFILE

# Domain directory, to check whether the domain is hosted with us or not
DOMAIN_DIRECTORY_CONF_PATH="/etc/sentora/configs/apache/domains/$FromDomainName.conf"
echo "DOMAIN_DIRECTORY_CONF_PATH : $DOMAIN_DIRECTORY_CONF_PATH" >> $LOGFILE

if [ -f "$DOMAIN_DIRECTORY_CONF_PATH" ];
then
    echo "Sender Email $SENDER_MAIL_ID" >> $LOGFILE
    # Create the blocked-mails file if it does not exist yet
    if [ ! -f $BLOCKED_MAIL_LIST_FILE ]
    then
        echo "File $BLOCKED_MAIL_LIST_FILE Does not exists so creating " >> $LOGFILE
        touch $BLOCKED_MAIL_LIST_FILE
        chown spamd:spamd $BLOCKED_MAIL_LIST_FILE
    else
        # Check whether this sender email id is blocked
        checkBlocked=`grep $SENDER_MAIL_ID $BLOCKED_MAIL_LIST_FILE`
        echo "Check Mail Id Output : $checkMailId" >> $LOGFILE
        if [ ! -z $checkBlocked ]
        then
            echo "Mail id $SENDER_MAIL_ID has been Blocked" >> $LOGFILE
            exit $?
        else
            echo "Mail id $SENDER_MAIL_ID Not blocked" >> $LOGFILE
        fi
    fi

    TMPFP=`mktemp`
    cat | $SPAMASSASSIN > $TMPFP
    echo $TMPFP >> $LOGFILE
    echo $@ >> $LOGFILE
    echo "Each Mail id : "$To >> $LOGFILE

    # Make temp file for each email
    From=`grep '^From.*$' $TMPFP | grep -E -o "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}\b" | cut -d ':' -f2`
    #Getting From email id from mail request
    #From=$SENDER_MAIL_ID
    #echo "From: $From" >> $LOGFILE
    #Getting To email id from mail request
    #To=`grep '^To.*$' $TMPFP | grep -E -o "\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,6}\b" | cut -d ':' -f2`
    To=$4
    echo "To email id: $To" >> $LOGFILE
    #Getting Date from mail request
    Date=`grep '^Date.*$' $TMPFP | cut -d ',' -f2 | cut -d '+' -f1`
    echo "Date: $Date" >> $LOGFILE

    #Getting Spam score of the email
    SpamScore=`grep 'X-Spam-Status' $TMPFP | grep score | cut -d " " -f3 | cut -d '=' -f2`
    echo "Spam Score : $SpamScore" >> $LOGFILE

    UserAgent=`grep 'User-Agent' $TMPFP | cut -d ":" -f2`
    echo "User Agent : $UserAgent" >> $LOGFILE

    # Decide incoming vs outgoing by checking the recipient domain
    dom=`echo $To | awk -F\@ '{print $2}'`
    dom_low=`echo $dom | awk '{print tolower($0)}'`

    if [[ $dom_low =~ "outlook.com" || $dom_low =~ "gmail.com" || $dom_low =~ "hotmail.com" ]]
    then
        # Check whether the destination email id is valid
        PHP_PATH=`whereis php | awk '{print $2}'`
        ### by nandhini 2.8 sending mail converted to lower-case string
        sendingmail=${To,,}
        echo "$PHP_PATH /usr/bin/validate_outgoing_emailid.php $sendingmail" >> $LOGFILE
        out=`$PHP_PATH /usr/bin/validate_outgoing_emailid.php $sendingmail`

        echo "Output of check valid email $sendingmail : $out" >> $LOGFILE
        if [[ $out = *"invalid"* ]]
        then
            echo "Invalid email id $sendingmail" >> $LOGFILE
            (
                echo "To: $From"
                echo "Subject: Sending mail to invalid email id"
                echo "Content-Type: text/html"
                echo
                echo "Dear User,<br><br> The email id you trying to send <b>$To</b> is invalid id. Please avoid sending more emails to invalid email ids like this. If you are keep sending then gmail or outlook will block this server."
                echo
            ) | $SENDMAIL -t
            exit $?
        else
            echo "Mail id is valid " >> $LOGFILE
        fi
    fi

    MailOutGoingLogDir=/var/sentora/spamd/OutGoingMailLogs
    MailOutGoingLogPath=$MailOutGoingLogDir/outmaillog.log
    EachMailCountsDir=$MailOutGoingLogDir/EmailsCounts
    EachMailCountsFilePath=$EachMailCountsDir/$From
    mkdir -p $MailOutGoingLogDir
    chown -R spamd:spamd $MailOutGoingLogDir
    # Create the outgoing-mail log file if it does not exist
    if [ ! -f $MailOutGoingLogPath ]
    then
        touch $MailOutGoingLogPath
    fi

    mkdir -p $EachMailCountsDir
    chown -R spamd:spamd $EachMailCountsDir
    # Per-address file storing the mail count within the hour, created if it does not exist
    if [ ! -f $EachMailCountsFilePath ]
    then
        touch $EachMailCountsFilePath
        echo 0 > $EachMailCountsFilePath
    fi

    echo "Outgoing Mail so spam filter check" >> $LOGFILE
    # Check whether the per-address file was modified within the last hour
    ModifiedDiff=$(( (`date +%s` - `stat -L --format %Y $EachMailCountsFilePath`) > (60*60) ))
    echo "File Modified status if it is 0 then it has modified within one hour, else it will return 1 $ModifiedDiff" >> $LOGFILE
    if [ $ModifiedDiff -eq 0 ]
    then
        COUNT=`cat $EachMailCountsFilePath`
        # Block if 40 or more mails were sent within an hour
        if [ $COUNT -ge 40 ]
        then
            checkBlocked=`grep $From $BLOCKED_MAIL_LIST_FILE`
            echo "Check Mail Id Spam Output : $checkMailId" >> $LOGFILE

            # Add the mail id to the blacklist if it is not already there
            if [ -z $checkBlocked ]
            then
                echo $From >> $BLOCKED_MAIL_LIST_FILE
            fi

            writeToOutMailLog="$Date ==> $From ==> $To ==> $SpamScore ==> $UserAgent ==> No ==> More than 40 mails has sent with in one hour"
            (
                echo "To: $From"
                echo "Subject: Your Mail id $From is Blocked"
                echo "Content-Type: text/html"
                echo
                echo "Dear Client,<br><br> Your Mail id <b>$From</b> has been blocked due to sending more than 40 mails within one Hour. Somebody may be hacked your email or due to Virus/Malware its sending spam emails. So please change your email password strong. And unblock your email using your cPanel."
                echo
            ) | $SENDMAIL -t

            echo $writeToOutMailLog >> $LOGFILE

            #Storing Outgoing mail status In Log
            echo $writeToOutMailLog >> $MailOutGoingLogPath
            exit $?
        fi
        ((COUNT++))
    else
        COUNT=1
    fi

    #Storing mail count to each mail id path, if spam score is above required spam score
    require_spam_score=`grep required_score $SPAM__SCORE_CONFIGURATION_FILE_PATH | cut -d ' ' -f2`
    #if [[ $(bc -l <<< "$SpamScore > $require_spam_score") -eq 0 ]]
    SpamScore=${SpamScore%.*}
    if [[ $SpamScore -ge $require_spam_score ]]
    then
        echo "Count of mails : $COUNT" >> $LOGFILE
        echo $COUNT > $EachMailCountsFilePath
    fi

    writeToOutMailLog="$Date ==> $From ==> $To ==> $SpamScore ==> $UserAgent ==> Yes"
    echo $writeToOutMailLog >> $LOGFILE

    #Storing Outgoing mail status In Log
    echo $writeToOutMailLog >> $MailOutGoingLogPath

    # Checking 100 Email per day, if outgoing email id is Gmail/Yahoo/Rediff/Hotmail
    # Start
    EachEmailCountsPerdayDir=$MailOutGoingLogDir/EmailCountsPerday
    EachMailCountsPerdayFilePath=$EachEmailCountsPerdayDir/$From

    mkdir -p $EachEmailCountsPerdayDir
    chown -R spamd:spamd $EachEmailCountsPerdayDir
    # Per-address per-day counts file, created if it does not exist
    if [ ! -f $EachMailCountsPerdayFilePath ]
    then
        touch $EachMailCountsPerdayFilePath
        echo "gmail - 0" > $EachMailCountsPerdayFilePath
        echo "yahoo - 0" >> $EachMailCountsPerdayFilePath
        echo "hotmail - 0" >> $EachMailCountsPerdayFilePath
        echo "rediff - 0" >> $EachMailCountsPerdayFilePath
    else
        CURRENT_TIME=`date "+%Y-%m-%d %H:%M:%S" | awk '{print $2}' | cut -d ':' -f1`
        FILE_MODIFIED_DATE=`stat -c %y $EachMailCountsPerdayFilePath | awk '{print $1}'`
        CURRENT_DATE=`date "+%Y-%m-%d"`
        FILE_MODIFIED_DIFF=$(( ($(date -d "$CURRENT_DATE UTC" +%s) - $(date -d "$FILE_MODIFIED_DATE UTC" +%s) )/(60*60*24) ))
        if [[ $CURRENT_TIME -ge 0 && $FILE_MODIFIED_DIFF -gt 0 ]]
        then
            echo "Reset Outgoing mail data per day 100 limit: " >> $LOGFILE
            echo "gmail - 0" > $EachMailCountsPerdayFilePath
            echo "yahoo - 0" >> $EachMailCountsPerdayFilePath
            echo "hotmail - 0" >> $EachMailCountsPerdayFilePath
            echo "rediff - 0" >> $EachMailCountsPerdayFilePath
        fi
    fi

    for addr in $(echo $To | tr "\n" "\n")
    do
        echo "Email : $addr"
    done

    dom=`echo $To | awk -F\@ '{print $2}'`
    dom_low=`echo $dom | awk '{print tolower($0)}'`
    echo $dom_low >> $LOGFILE
    if [[ $dom_low =~ "gmail.com" || $dom_low =~ "hotmail.com" || $dom_low =~ "rediffmail" || $dom_low =~ "yahoo" || $dom_low =~ "ymail" || $dom_low =~ "outlook" ]]
    then
        if [[ $dom_low =~ "gmail.com" ]]
        then
            SearchToAddress="gmail"
        fi
        if [[ $dom_low =~ "hotmail.com" || $dom_low =~ "outlook" ]]
        then
            SearchToAddress="hotmail"
        fi
        if [[ $dom_low =~ "rediffmail" ]]
        then
            SearchToAddress="rediff"
        fi
        if [[ $dom_low =~ "yahoo" || $dom_low =~ "ymail" ]]
        then
            SearchToAddress="yahoo"
        fi

        echo "Sending to $SearchToAddress"
        echo "Checking email count per day of $SearchToAddress : grep -n "$SearchToAddress" $EachMailCountsPerdayFilePath |cut -f1 -d:" >> $LOGFILE
        LINE_NUMBER=`grep -n "$SearchToAddress" $EachMailCountsPerdayFilePath |cut -f1 -d:`
        #COUNT=`grep -r $SearchToAddress $EachMailCountsPerdayFilePath | awk '{print tolower($3)}'`
        now=$(date)
        month=`echo $now | awk '{print $2}'`
        todaydate=`echo $now | awk {'print $3'}`
        todaydate_count="${#todaydate}"
        if [ $todaydate_count == 1 ]; then
            COUNT=`grep sasl_username=$From /var/log/maillog | grep "$month  $todaydate" | awk {'print $NF'} |sort |uniq -c |sort -n | awk {'print $1'}`
        else
            COUNT=`grep sasl_username=$From /var/log/maillog | grep "$month $todaydate" | awk {'print $NF'} |sort |uniq -c |sort -n | awk {'print $1'}`
        fi

        # Check whether today's email count is greater than the per-day limit
        # code add by Kesav
        WHERE_MYSQL=`whereis mysql | awk '{ print $2 }'`
        DB_MAIL_COUNT=""
        DB_NAME='sentora_core'
        username='mysqlspamd'
        password='WnE56amTlFy4O1U8'
        DB_MAIL_COUNT=$(echo "select mailperhrlimt_size from x_mailboxes where mb_address_vc='$From';" | $WHERE_MYSQL --user=$username --password=$password --socket=/usr/local/mysql/mysql.sock sentora_core --skip-column-names 2>/dev/null)

        if [ $COUNT -gt $DB_MAIL_COUNT ]
        then
            echo "Today email count for the email id $From to $SearchToAddress is greater than $DB_MAIL_COUNT so block" >> $LOGFILE
            (
                echo "To: $From"
                echo "Subject: Your Mail id $From has sent more than $DB_MAIL_COUNT email today to $SearchToAddress"
                echo "Content-Type: text/html"
                echo
                echo "Dear Client,<br><br> From this email $From you have sent $COUNT emails per day that is more than $DB_MAIL_COUNT emails per day you have set for $From email account . If you send more than emails per day, your email ID most likely be black listed. Your domain and IP address will be black listed. If you want to send marketing Email or spamming, which is not allowed with Ovi Hosting servers. "
                echo
            ) | $SENDMAIL -t
            exit $?
        else
            echo "Email not more than $DB_MAIL_COUNT so update the count for $SearchToAddress" >> $LOGFILE
            ((COUNT++))
            ReplaceValue="$SearchToAddress - $COUNT"
            echo $ReplaceValue
            sed -i "${LINE_NUMBER}s/.*/$ReplaceValue/" "$EachMailCountsPerdayFilePath"
            echo "sed -i "${LINE_NUMBER}s/.*/$ReplaceValue/" "$EachMailCountsPerdayFilePath"" >> $LOGFILE
        fi
    fi
    #End of outgoing mail 100 limit validation

    cat $TMPFP | ${SENDMAIL} "$@"
    echo "Temp File is : "$TMPFP >> $LOGFILE
    rm -f $TMPFP
    exit $?
else
    echo "Incoming Mail so send it " >> $LOGFILE
fi

SPAMASSASSIN=/usr/bin/spamc
touch /etc/postfix/log_test;
touch /var/log/rootmaillog;
SENDER_MAIL_ID=$3

mailqcount=`mailq | grep -c "^[A-F0-9]"`;
mailqneededcount=100;
if [ "$mailqcount" -gt "$mailqneededcount" ];then
    sh /usr/bin/removeroot.sh;
fi
mailqcount=`mailq | grep -c "^[A-F0-9]"`;
mailqneededcount=100;
if [ "$mailqcount" -gt "$mailqneededcount" ];then
    php /usr/bin/phpsendingmail.php;
fi

${SPAMASSASSIN} | ${SENDMAIL} "$@"

declare RECIPIENT="unset"
declare SENDER="unset"
declare SASL_USERNAME="unset"
declare CLIENT_IP="unset"
declare AUTHENTICATED="unset"
declare AUTORESPONSE_MESSAGE="unset"
declare DISABLE_AUTORESPONSE="unset"
declare ENABLE_AUTORESPONSE="unset"
declare DELETE_AUTORESPONSE="unset"
declare SEND_RESPONSE="unset"
declare RESPONSES_DIR="/var/spool/autoresponse/responses"
declare SENDMAIL="/usr/sbin/sendmail"
declare RATE_LOG_DIR="/var/spool/autoresponse/log"
declare LOGGER="/usr/bin/logger"
#There are two different modes of operation:
#   MODE="0" represents the actions that can not be executed from the command line
#   MODE="1" represents the actions that can be executed from the command line
declare MODE="0"
#Time limit, in seconds, that determines how often an
#autoresponse will be sent, per e-mail address (3600 = 1 hour, 86400 = 1 day)
declare RESPONSE_RATE="10"
SENDER=$3;
SEND_RESPONSE=1;
COUNTD=$#;
ALLDATA=$@;
INC_D=$(( $COUNTD - 3 ));

AUTO_D=${*: -$INC_D};

if [ "${MODE}" = "0" ]; then

    rate_log_check() {
        #Only send one autoresponse per e-mail address per the time limit (in seconds) designated by the RESPONSE_RATE variable
        if [ -f "${RATE_LOG_DIR}/${RECIPIENT}/${SENDER}" ]; then
            declare ELAPSED_TIME=`echo $[\`date +%s\` - \`stat -c %X "${RATE_LOG_DIR}/${RECIPIENT}/${SENDER}"\`]`
            if [ "${ELAPSED_TIME}" -lt "${RESPONSE_RATE}" ]; then
                ${LOGGER} -i -t autoresponse -p mail.notice "An autoresponse has already been sent from ${RECIPIENT} to ${SENDER} within the last ${RESPONSE_RATE} seconds"
                SEND_RESPONSE=0
            fi
        fi
    }

    for g in $AUTO_D;
    do
        RECIPIENT=$g;
        if [ -f "${RESPONSES_DIR}/${RECIPIENT}" ]; then
            rate_log_check
            #If SEND_RESPONSE still equals "1" after the rate_log_check function, send an autoresponse.
            #   if [ "${SEND_RESPONSE}" = "1" ] && [ "${RECIPIENT}" != "${SENDER}" ]; then
            if [ "${SEND_RESPONSE}" = "1" ]; then
                (cat "${RESPONSES_DIR}/${RECIPIENT}") | sed -e "0,/^$/ { s/^To:.*/To: <${SENDER}>/ }" -e '0,/^$/ { /^Date:/ d }' | ${SENDMAIL} -i -f "${RECIPIENT}" "${SENDER}"
                mkdir -p "${RATE_LOG_DIR}/${RECIPIENT}"
                touch "${RATE_LOG_DIR}/${RECIPIENT}/${SENDER}"
                ${LOGGER} -i -t autoresponse -p mail.notice "Autoresponse sent from ${RECIPIENT} to ${SENDER}"
            fi
        fi
        #   exec ${SENDMAIL} -i -f "${SENDER}" "${RECIPIENT}"
    done

fi
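The script's hourly window hinges on comparing the counts file's mtime against the current time. A minimal, self-contained sketch of that arithmetic (the temp file and the 3600-second window here are illustrative):

```shell
# Create a fresh temp file; its mtime is "now"
f=$(mktemp)
# 1 if the file is older than one hour, 0 otherwise (same test the script uses)
age_exceeded=$(( ( $(date +%s) - $(stat -L --format %Y "$f") ) > 3600 ))
echo "$age_exceeded"
rm -f "$f"
```

For a file just created, this prints 0; a file untouched for over an hour yields 1.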

When receiving mail, I get the following log:

Jun 26 13:12:44 host postfix/smtpd[851233]: Anonymous TLS connection established from mail-oi1-f174.google.com[209.85.167.174]: TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
Jun 26 13:12:44 host postfix/smtpd[851233]: discarding EHLO keywords: CHUNKING
Jun 26 13:12:45 host postfix/smtpd[851233]: 6B67AFFC6E: client=mail-oi1-f174.google.com[209.85.167.174]
Jun 26 13:12:45 host postfix/smtpd[851663]: connect from unknown[212.70.149.88]
Jun 26 13:12:45 host postfix/cleanup[853180]: 6B67AFFC6E: message-id=CAG5_gF4oD_M-RYdtU6nMkjDGA+vo4p42739BT61a_wxkh9+dkw@mail.gmail.com
Jun 26 13:12:46 host postfix/qmgr[850742]: 6B67AFFC6E: from=sathiyasaravanababu91@gmail.com, size=2818, nrcpt=1 (queue active)
Jun 26 13:12:46 host spamd[851270]: spamd: connection from ::1 [::1]:47324 to port 783, fd 5
Jun 26 13:12:46 host spamd[851270]: spamd: using default config for spamd: /var/sentora/vmail///spamassassin/user_prefs
Jun 26 13:12:46 host spamd[851270]: spamd: processing message CAG5_gF4oD_M-RYdtU6nMkjDGA+vo4p42739BT61a_wxkh9+dkw@mail.gmail.com for spamd:986
Jun 26 13:12:46 host postfix/smtpd[851233]: disconnect from mail-oi1-f174.google.com[209.85.167.174] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7
Jun 26 13:12:46 host spamd[851270]: spamd: clean message (0.1/7.0) for spamd:986 in 0.5 seconds, 2959 bytes.
Jun 26 13:12:46 host spamd[851270]: spamd: result: . 0 - DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_ENVFROM_END_DIGIT,FREEMAIL_FROM,HTML_MESSAGE,RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS scantime=0.5,size=2959,user=spamd,uid=986,required_score=7.0,rhost=::1,raddr=::1,rport=47324,mid=CAG5_gF4oD_M-RYdtU6nMkjDGA+vo4p42739BT61a_wxkh9+dkw@mail.gmail.com,autolearn=ham autolearn_force=no
Jun 26 13:12:46 host postfix/pipe[853182]: 6B67AFFC6E: to=info@himcabindia.com, relay=spamfilter, delay=1.3, delays=0.67/0.01/0/0.59, dsn=2.0.0, status=sent (delivered via spamfilter service (/usr/bin/spamfilter.sh: line 311: 853195 Done ${SPAMASSASSIN} 853196 Segment))
Jun 26 13:12:46 host postfix/qmgr[850742]: 6B67AFFC6E: removed

The message is removed from the mail queue but never arrives in the inbox.

Sending mail fails in the same way.

But if I remove the spamfilter from /etc/postfix/master.cf, mail is sent and received via Dovecot.

Ansible: How to join a string and an integer into a new string

Posted: 27 Jun 2021 09:19 PM PDT

I need help with an Ansible playbook: how do I join/combine a string (linux) and a number (0002) into linux0002? Also, how can I use %04d to format the integer 2 after an arithmetic operation? Thanks in advance.
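One way to do both in a playbook (variable names here are illustrative): Jinja2's `~` operator concatenates values as strings, and the `format` filter applies printf-style formatting such as `%04d`, including after arithmetic:

```yaml
- hosts: localhost
  gather_facts: false
  vars:
    base_name: linux
    seq: 2
  tasks:
    # '%04d' | format(seq) zero-pads the number, then ~ concatenates
    - name: Build a name like linux0002
      debug:
        msg: "{{ base_name ~ '%04d' | format(seq) }}"

    # Arithmetic first, then formatting: yields linux0003
    - name: Build the next name after an increment
      debug:
        msg: "{{ base_name ~ '%04d' | format(seq + 1) }}"
```

In Jinja2 the filter binds more tightly than `~`, so `'%04d' | format(seq)` is evaluated first and the padded string is then concatenated to `base_name`.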

AWS budget cannot do stop RDS action

Posted: 27 Jun 2021 08:41 PM PDT

I do not want to create an IAM user; I am logged in as root in the AWS console.

How can I create a role such that my RDS instance stops on budgeted amount?

I always get the following error when I configure an RDS budget action:

Budgets permission required to assume [ExecutionRole: arn:aws:iam::351811911299:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor]. Please follow the instruction to grant assumeRole access to [Service Principal: budgets.amazonaws.com].  
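Reading the error, the budget action is pointing at the Trusted Advisor service-linked role, whose trust policy cannot be changed to allow Budgets. A budget action instead needs its own execution role that `budgets.amazonaws.com` is allowed to assume. A minimal sketch of that role's trust policy (the role itself and its permission policy are things you would create; this is not the full setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "budgets.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The role's permission policy would then need to allow the RDS stop action on the target instance.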

Where does Virtualmin store DKIM private key for virtual hosts?

Posted: 27 Jun 2021 07:55 PM PDT

Virtualmin supports DKIM signing. It can create an automatic DNS TXT entry with the DKIM public key. Where is the private key stored on the server?

Websites do not load properly with Nginx load balancing

Posted: 27 Jun 2021 07:28 PM PDT

I have 3 Nginx Servers like this -

  • lab01.net => 192.168.89.128 (load balancer)
  • lab02.net => 192.168.89.129 (backend)
  • lab03.net => 192.168.89.130 (backend)

-------------- lab01.net configuration ----------

upstream backend {
        server  lab02.net:443;
        server  lab03.net:443;
}

server {
        listen  80;
        listen  [::]:80;

        server_name     lab01.net;
        return  301     https://lab01.net$request_uri;
}

server {
        listen  443 ssl http2;
        listen  [::]:443 ssl http2;
        server_name     lab01.net;

        ssl_certificate /etc/nginx/ssl/ssl.pem;
        ssl_certificate_key     /etc/nginx/ssl/ssl.key;

        location / {
                proxy_pass      https://backend;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Real-IP $remote_addr;
                proxy_set_header        Host $host;
        }
}

------------------- lab02.net configuration --------------------------

server {
        listen  8080;
        listen  [::]:8080;

        server_name     lab02.net;
        return  301     https://lab02.net$request_uri;
}

server {
        listen  443 ssl http2;
        listen  [::]:443 ssl http2;
        server_name     lab02.net;

        root    /srv/www/en;
        index   index.html index.htm;

        ssl_certificate /etc/nginx/ssl/ssl.pem;
        ssl_certificate_key     /etc/nginx/ssl/ssl.key;
}

------------------- lab03.net configuration -----------------------

server {
        listen  8080;
        listen  [::]:8080;

        server_name     lab03.net;
        return  301     https://lab03.net$request_uri;
}

server {
        listen  443 ssl http2;
        listen  [::]:443 ssl http2;
        server_name     lab03.net;

        root    /srv/www/es;
        index   index.html index.htm;

        ssl_certificate /etc/nginx/ssl/ssl.pem;
        ssl_certificate_key     /etc/nginx/ssl/ssl.key;
}

Firewalld allows port 8080, HTTP, and HTTPS on all servers.

The SELinux policy "semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"" is applied on both lab02.net and lab03.net, and "setsebool -P httpd_can_network_connect on" on the lab01.net Nginx load balancer.

Everything runs, but the websites do not render correctly: text, images, and other content end up out of place. When I test with a plain index.html it works fine, but with a real HTML and CSS template website the problem appears.

Change the BIND query log file destination

Posted: 27 Jun 2021 06:52 PM PDT

I am trying to change where my BIND server stores query logs, from /var/log/messages to /var/log/named/named.log. When I restart the BIND service for the changes to take effect, it fails to start because the service does not have permission to access the new log file. How can I give BIND permission to access it? The error output is below. I am running this server on Debian.

--
-- A start job for unit bind9.service has finished with a failure.
--
-- The job identifier is 3717 and the job result is failed.
Jun 27 17:12:11 bcc-21 named[3188]: configuring command channel from '/etc/bind/rndc.key'
Jun 27 17:12:11 bcc-21 named[3188]: command channel listening on 127.0.0.1#953
Jun 27 17:12:11 bcc-21 named[3188]: configuring command channel from '/etc/bind/rndc.key'
Jun 27 17:12:11 bcc-21 named[3188]: command channel listening on ::1#953
Jun 27 17:12:11 bcc-21 named[3188]: isc_stdio_open '/var/log/named/named.log' failed: permission denied
Jun 27 17:12:11 bcc-21 named[3188]: configuring logging: permission denied
Jun 27 17:12:11 bcc-21 named[3188]: loading configuration: permission denied
Jun 27 17:12:11 bcc-21 named[3188]: exiting (due to fatal error)
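On Debian, named typically runs as the `bind` user, so the usual fix is to create the log directory and hand ownership to that user (and, if AppArmor is enabled, confirm the profile for named permits the new path). The real commands are shown in comments; the runnable part below only demonstrates the same directory/mode pattern without root:

```shell
# On the real server, as root (paths from the question):
#   mkdir -p /var/log/named
#   chown bind:bind /var/log/named
#   chmod 750 /var/log/named
# Unprivileged demonstration of the directory/mode pattern:
d=$(mktemp -d)
mkdir -p "$d/named"
chmod 750 "$d/named"
stat -c '%a' "$d/named"   # prints the mode the real directory should have
```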

Pass SSH connection to different server on connect

Posted: 27 Jun 2021 04:55 PM PDT

Good evening,
I'm currently trying to get a bit deeper into Linux than I'm familiar with.

Let's get straight to my problem:
First, let's talk about my setup.
I have 3 servers, with each having a public IP.
Each server is part of a VLAN.
Server #1 (vlan 10.0.0.2) is not protected by a firewall.
Server #2 (vlan 10.0.0.3) and Server #3 (vlan 10.0.0.4) are completely blocked off from the internet and can only be accessed from the vlan.
Server #2 runs a KeyCloak container. However, this is irrelevant to the problem.
Server #3 should serve as my git server.
Normally I would just create a git user and link the authorized_keys file with the one of my GitLab container. Each public key would be prefixed with a command, which would then pass the connection to the ssh daemon inside the container.
But since Server #3 is not publicly accessible, I need to accept the incoming ssh connection on Server #1.
I created a git user and started to think how I can overcome this problem.

I thought about two ways I could handle it.

  1. Allow the git user to be accessed without a password and open a connection to git@10.0.0.4 (does this work? Does the client ssh-agent work in this case? Could an attacker get out of the internal ssh connection and do stuff on Server #1?)
  2. Server #3 connects regularly to Server #1 and updates the authorized_keys file (then I would have to write a second script at the command location, which would then open the connection to Server #3. This would be slower, because the user has to wait till Server #3 syncs with Server #1)
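A third option worth knowing: OpenSSH's `ProxyJump` lets a client reach Server #3 through Server #1 without any custom forwarding script on Server #1, and the client's ssh-agent works through the jump. A client-side `~/.ssh/config` sketch (the alias and placeholder address are illustrative):

```
Host gitbox
    HostName 10.0.0.4
    User git
    # Server #1 acts only as a relay; the placeholder below stands in for its public address
    ProxyJump user@<server1-public-ip>
```

With this, `git clone gitbox:repo.git` connects end to end; Server #1 only forwards the TCP stream and never sees an interactive shell for the git user.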

How do I create separate mailbox configurations per virtual user with Dovecot?

Posted: 27 Jun 2021 02:28 PM PDT

Let's say I have two virtual users: bugs@domain.tld and admin@domain.tld.

I want the mailboxes for bugs to be configured like this...:

mailbox Sent {
  special_use = \Sent
}
mailbox Drafts {
  special_use = \Drafts
}
mailbox "Priority 1" {
  auto = subscribe
}
mailbox "Priority 2" {
  auto = subscribe
}
mailbox "Priority 3" {
  auto = subscribe
}
mailbox Unreplied {
  auto = subscribe
}
mailbox Resolved {
  auto = subscribe
}

...but have the mailboxes for admin have some different folders configured:

mailbox Sent {
  special_use = \Sent
}
mailbox Drafts {
  special_use = \Drafts
}
mailbox System {
  auto = subscribe
}
mailbox DMARC {
  auto = subscribe
}
mailbox Archives {
  auto = create
  special_use = \Archive
}
mailbox Trash {
  special_use = \Trash
}
mailbox Spam {
  auto = create
  special_use = \Junk
}

I don't want the folders for the bugs email to be copied over to the admin email, and vice versa.

What I've tried is using namespaces and then setting each virtual user's inbox namespace name via my passwd file, like this:

admin:<password>::::::userdb_mail=maildir:/home/mail/admin NAMESPACE=primary userdb_namespace/primary/inbox=yes userdb_namespace/primary/list=yes userdb_namespace/primary/prefix=primary/
bugs:<password>::::::userdb_mail=maildir:/home/mail/bugs NAMESPACE=bugs userdb_namespace/bugs/inbox=yes userdb_namespace/bugs/list=yes userdb_namespace/bugs/prefix=bugs/

but Dovecot's logs say:

namespace configuration error: Duplicate namespace prefix: "" in=0 out=408 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0

My full 15-mailboxes.conf:

namespace bugs {
  list = no
  type = private
  mailbox Sent {
    special_use = \Sent
  }
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox "Priority 1" {
    auto = subscribe
  }
  mailbox "Priority 2" {
    auto = subscribe
  }
  mailbox "Priority 3" {
    auto = subscribe
  }
  mailbox Unreplied {
    auto = subscribe
  }
  mailbox Resolved {
    auto = subscribe
  }
}
namespace primary {
  list = no
  type = private
  mailbox Sent {
    special_use = \Sent
  }
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox System {
    auto = subscribe
  }
  mailbox DMARC {
    auto = subscribe
  }
  mailbox Archives {
    auto = create
    special_use = \Archive
  }
  mailbox Trash {
    special_use = \Trash
  }
  mailbox Spam {
    auto = create
    special_use = \Junk
  }
}
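One plausible reading of the "Duplicate namespace prefix" error (worth verifying against the Dovecot namespace documentation): neither namespace in the config file sets a `prefix`, so both default to the empty prefix, and the per-user overrides only apply after the base config has to parse cleanly. A sketch giving each namespace a distinct base prefix in the file itself, which the passwd-file fields can then override per user:

```
namespace bugs {
  prefix = bugs/
  separator = /
  list = no
  type = private
  # mailbox blocks as above
}
namespace primary {
  prefix = primary/
  separator = /
  list = no
  type = private
  # mailbox blocks as above
}
```

Each user also needs exactly one namespace with `inbox = yes`, which is what the `userdb_namespace/.../inbox=yes` fields in the passwd file are attempting to supply.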

Hard Drive Recovery - I/O Error - Bad Partition Table and Filesystem

Posted: 27 Jun 2021 02:16 PM PDT

I am attempting to recover an external hard drive. It is a Western Digital (as always).

Force mounting does not work:

user@linux:/home/user# mount -t vfat /dev/sdb1 /media/test1 -o force,umask=000
mount: /media/test1: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.

I get I/O errors when I attempt to run GParted. If I boot with the drive attached to the system, Linux freezes. GParted also crashes:

[screenshot]

fdisk -l output:

Disk /dev/sdb: 1.84 TiB, 2000365289472 bytes, 3906963456 sectors
Disk model: My Passport 2626
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8AD8DA33-56D1-4E2D-A00D-AB61AC3863C0

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 3906961407 3906959360  1.8T Microsoft basic data

lsblk output:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  50.7M  1 loop /snap/snap-store/481
loop1    7:1    0 217.9M  1 loop /snap/gnome-3-34-1804/60
loop2    7:2    0   2.2M  1 loop /snap/gnome-system-monitor/148
loop3    7:3    0 140.7M  1 loop /snap/gnome-3-26-1604/100
loop4    7:4    0  55.4M  1 loop /snap/core18/1932
loop5    7:5    0  97.8M  1 loop /snap/core/10185
loop6    7:6    0  62.1M  1 loop /snap/gtk-common-themes/1506
sda      8:0    0 232.9G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
└─sda2   8:2    0 232.4G  0 part /
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part
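For a drive throwing I/O errors, the standard advice before any further repair attempts is to image it first with GNU ddrescue and run recovery tools against the image, not the failing disk. A command sketch (the destination paths are illustrative; the destination must be a different, healthy disk with at least 2 TB free):

```shell
# First pass: copy the easy sectors quickly, recording bad areas in the map file
ddrescue -n /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.map
# Second pass: retry the difficult sectors up to 3 times with direct disk access
ddrescue -d -r3 /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.map
```

TestDisk, fsck, or file-carving tools can then be pointed at sdb.img safely.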

Testdisk output: (screenshots omitted)

Log:

Tue Jun  8 22:18:42 2021
Command line: TestDisk

TestDisk 7.1, Data Recovery Utility, July 2019
Christophe GRENIER <grenier@cgsecurity.org>
https://www.cgsecurity.org
OS: Linux, kernel 5.9.0-050900-lowlatency (#202010112230 SMP PREEMPT Sun Oct 11 22:37:09 UTC 2020) x86_64
Compiler: GCC 9.2
ext2fs lib: 1.45.5, ntfs lib: libntfs-3g, reiserfs lib: none, ewf lib: none, curses lib: ncurses 6.1
/dev/sda: LBA, LBA48 support
/dev/sda: size       488397168 sectors
/dev/sda: user_max   488397168 sectors
Warning: can't get size for Disk /dev/mapper/control - 0 B - 0 sectors, sector size=512
Warning: can't get size for Disk /dev/loop7 - 0 B - 0 sectors, sector size=512
Hard disk list
Disk /dev/sda - 250 GB / 232 GiB - CHS 30401 255 63, sector size=512 - GB0250EAFYK, S/N:WCAT1H963933, FW:HPG2
Disk /dev/sdb - 2000 GB / 1862 GiB - CHS 243197 255 63, sector size=512 - WD My Passport 2626, FW:1028
Disk /dev/loop0 - 53 MB / 50 MiB - 103776 sectors (RO), sector size=512
Disk /dev/loop1 - 228 MB / 217 MiB - 446248 sectors (RO), sector size=512
Disk /dev/loop2 - 2273 KB / 2220 KiB - 4440 sectors (RO), sector size=512
Disk /dev/loop3 - 147 MB / 140 MiB - 288064 sectors (RO), sector size=512
Disk /dev/loop4 - 58 MB / 55 MiB - 113384 sectors (RO), sector size=512
Disk /dev/loop5 - 102 MB / 97 MiB - 200168 sectors (RO), sector size=512
Disk /dev/loop6 - 65 MB / 62 MiB - 127160 sectors (RO), sector size=512

Partition table type (auto): EFI GPT
Disk /dev/sdb - 2000 GB / 1862 GiB - WD My Passport 2626
Partition table type: EFI GPT

Analyse Disk /dev/sdb - 2000 GB / 1862 GiB - CHS 243197 255 63
hdr_size=92
hdr_lba_self=1
hdr_lba_alt=3906963455 (expected 3906963455)
hdr_lba_start=34
hdr_lba_end=3906963422
hdr_lba_table=2
hdr_entries=128
hdr_entsz=128
check_part_gpt failed for partition
 1 P MS Data                     2048 3906961407 3906959360 [My Passport]
Current partition structure:
check_FAT: can't read FAT boot sector
No FAT, NTFS, ext2, JFS, Reiser, cramfs or XFS marker
 1 P MS Data                     2048 3906961407 3906959360 [My Passport]
 1 P MS Data                     2048 3906961407 3906959360 [My Passport]

TestDisk then hangs when attempting a "Quick Search" for the partition.

chkdsk does not see the filesystem. (screenshot omitted)

What I did next was to copy the data onto a second external hard drive using gddrescue, an amazing piece of software that was able to work around the I/O errors.

After I left gddrescue running on the drive for several days, it recovered most of the data, copying what was on the broken disk (sdb) to the new one (sdc). Thank you to the developers for this amazing piece of software.
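For reference, the usual two-pass invocation looks like the sketch below; the device names match the question, while the mapfile name is an assumption. The mapfile is what lets ddrescue resume after interruptions and retry only the unread areas.

```shell
# Pass 1: copy everything readable, skipping the slow "scraping" of bad areas
ddrescue -f -n /dev/sdb /dev/sdc rescue.map

# Pass 2: retry the bad areas up to 3 times, using direct disc access
ddrescue -f -d -r3 /dev/sdb /dev/sdc rescue.map
```

All further repair attempts (testdisk, photorec, fsck) can then be run against the copy instead of the failing drive.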

I then ran photorec (thank you to Christophe Grenier, the developer) on the new disk, and it recovered most of the files. (screenshot omitted)

However, photorec obviously does not recover the directory (folder) structure. As a result, there are a hundred folders with lots of photos and other files spread across them.
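Until (and unless) the folder structure is recovered, the flat photorec output can at least be re-grouped mechanically. A small sketch of that idea (a hypothetical helper, not part of photorec) that moves every recovered file into a folder named after its extension:

```python
import shutil
from pathlib import Path

def sort_by_extension(src: Path, dst: Path) -> dict:
    """Move every file under src into dst/<extension>/ and return counts.

    photorec output names (f0000001.jpg, ...) are unique, so name
    collisions are not handled here.
    """
    counts: dict = {}
    for f in list(src.rglob("*")):  # list() so moves don't disturb iteration
        if not f.is_file():
            continue
        ext = f.suffix.lstrip(".").lower() or "noext"
        target = dst / ext
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(target / f.name))
        counts[ext] = counts.get(ext, 0) + 1
    return counts
```

For example, `sort_by_extension(Path("recup_dir.1"), Path("sorted"))` would leave all the JPEGs under `sorted/jpg/`.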

I would still like to recover the folder structure, if possible, in addition to the files. I then ran testdisk on the new disk. A quick partition search only reveals the partition that was originally on the new disk, before I copied the data onto it from the broken disk with gddrescue. After a deep partition search, I see a number of partitions. I have no idea why there are HFS+ partitions on the disk, beyond the fact that the disk was used with a Mac recently. (screenshot omitted)

TestDisk presents me with the following partitions after the deep search concludes. The second partition is the one that was originally on the new drive to which the data was copied. (screenshot omitted)

What should I try next to recover the partition? I believe the drive was FAT32, as it was used on both Windows and Mac. I do not believe it was formatted as HFS+, unless my friend, whose data this is, did so accidentally.

Please advise!

Postfix Dovecot Still use Self Signed SSL even I have configured Lets Encrypt SSL

Posted: 27 Jun 2021 09:53 PM PDT

I am using CyberPanel on CentOS 7, and I set up SSL for my Postfix and Dovecot. But I still get an "SSL invalid" error caused by the self-signed certificate, even though I have configured SSL using Let's Encrypt.

This is /etc/postfix/main.cf

smtpd_tls_cert_file = /etc/letsencrypt/live/mail.domain.net/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.domain.net/privkey.pem

This is /etc/dovecot/dovecot.conf

ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem
....
local_name mail.domain.net {
        ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
        ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem
}

local_name mail.sub.domain.net {
        ssl_cert = </etc/letsencrypt/live/mail.sub.domain.net/fullchain.pem
        ssl_key = </etc/letsencrypt/live/mail.sub.domain.net/privkey.pem
}

This is /etc/dovecot/conf.d/10-ssl.conf

ssl = required
ssl_cert = </etc/letsencrypt/live/mail.domain.net/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.domain.net/privkey.pem

All files point to the correct SSL files. However, when I try to log in over IMAP and SMTP using SSL, I get an "SSL invalid" error caused by a self-signed certificate for www.example.com (not mail.domain.net).

When I check using the command openssl s_client -servername mail.domain.net -connect mail.domain.net:995, I get:

CONNECTED(00000003)
depth=0 C = US, ST = Denial, L = Springfield, O = Dis, CN = www.example.com
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = Denial, L = Springfield, O = Dis, CN = www.example.com
verify return:1
---
Certificate chain
 0 s:/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com
   i:/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDizCCAnOgAwIBAgIJAJDbjRXJistMMA0GCSqGSIb3DQEBCwUAMFwxCzAJBgNV
BAYTAlVTMQ8wDQYDVQQIDAZEZW5pYWwxFDASBgNVBAcMC1NwcmluZ2ZpZWxkMQww
CgYDVQQKDANEaXMxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbTAeFw0yMTA2Mjcx
NzI0MDBaFw0zMTA2MjUxNzI0MDBaMFwxCzAJBgNVBAYTAlVTMQ8wDQYDVQQIDAZE
ZW5pYWwxFDASBgNVBAcMC1NwcmluZ2ZpZWxkMQwwCgYDVQQKDANEaXMxGDAWBgNV
BAMMD3d3dy5leGFtcGxlLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBAMlprp3IA+Hbl43gIyiv0VQ/8DGKI3hH1E2GnVCuZKHbiwQr/j1vtnJIsFUt
r6AVwW+LAvDVT723CgivZMiXtrO1ItsOoU9ifV6w+nak8cFsFJZKaprXgU6dlQk8
K0xVMvqTEJa29v1igusmpl9Kv80cPjUCEMfcIjxvo51Ob0rV3Eyale+yXImj9Va/
YU7aICSvuLlHkPGf8VRtu+HZOyhzBerROikUN6p2hqMIjK2SUh0uUzbBFRwZHL6O
e2E9Bq2QQ0Cr5Fpid/XPwDPdxnGdnGcjNWv14vqeRDwErGpjGzn3FyiXQdAoB3wG
jJauwCAm680NMuH/mTVvUcal1CcCAwEAAaNQME4wHQYDVR0OBBYEFLAfEGhJad43
w9Pf90yeZg3i/AYtMB8GA1UdIwQYMBaAFLAfEGhJad43w9Pf90yeZg3i/AYtMAwG
A1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJifYgBsDverQjQ+3x8GWbmz
T4qw4uxlPLal8+wZrmuFxkTdXBixtd7xT3J7NPpXK1I/i9SUMsT9EqwMpvtz8Ybi
409QvsCb/LyADPI4eorbGIByYZa+wTHNbLtMa+PybwoHsLANGvwVf35tuXWhV2u7
/PxxvwZwPRXyDiNZYl6CXm282eqUu2iVU7j5+Mon5OCWN82Z5rUU67DFKyhyE6MC
j4tsWO5ylBKhhZ7A5EJd0gqSSIo495XnaNazXr2KeTOfwrBPOj2dHO1CnMnkubJm
wd31QwGht2wX/yGBtRNk+fxrA4ObKgva/bRLYpcZr6axva+vMFmJ2bVC1W3pUmU=
-----END CERTIFICATE-----
subject=/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com
issuer=/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 1590 bytes and written 441 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 88F2CCFDE63FE391E9824F596E0C8300E44CB306F969E2A1C0AFE3B75E5A4D74
    Session-ID-ctx:
    Master-Key: E22198E25F15AA193B9E73446CB934276DF90987DFC75B1B74DDAF3247CA8436CDB93B3274102188B3470DF1A4EFB0D1
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - e6 78 ae 14 e1 04 0d b4-64 82 65 9e 14 ad 32 9c   .x......d.e...2.
    0010 - f3 f0 c2 fd f9 12 5b bf-0f 50 75 79 64 5c bb ba   ......[..Puyd\..
    0020 - 31 f6 37 bd 1c b2 e7 dc-d9 02 c7 53 f4 f9 0c a6   1.7........S....
    0030 - d4 51 6a 60 6b 34 04 41-fd b3 7d 53 14 ff 1d b4   .Qj`k4.A..}S....
    0040 - a2 82 67 6e da d7 80 02-b0 9f 6d 82 b4 17 72 cf   ..gn......m...r.
    0050 - 30 05 54 fc 8c be 60 6d-e5 0f b8 25 04 f3 43 6d   0.T...`m...%..Cm
    0060 - 7e 13 f1 85 02 03 90 a2-50 82 64 43 aa 79 b8 ee   ~.......P.dC.y..
    0070 - 86 08 ef 7a ac 4b c7 86-57 bc 09 a4 9a bb 23 92   ...z.K..W.....#.
    0080 - cb 18 74 a4 90 c5 b1 8b-39 3c cc 69 ee e8 fb 08   ..t.....9<.i....
    0090 - 60 93 ea 17 35 d5 58 0d-ee 1b 68 c2 98 d0 e9 9c   `...5.X...h.....
    00a0 - f5 a7 24 9b 29 0a 48 6b-70 f8 a5 9a 7c e5 e8 88   ..$.).Hkp...|...

    Start Time: 1624855926
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
+OK Dovecot ready.

This is the log on the mail server (systemctl status postfix -l):

230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<RLYR5sLFeh62/Xx7>
Jun 28 00:42:37 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<WF4U5sLFlym2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<nasX5sLFoim2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<BFYY5sLFrCm2/Xx7>
Jun 28 00:42:38 mail-domain-net dovecot[574952]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=182.253.XXX.XXX, lip=10.5.224.230, TLS handshaking: SSL_accept() failed: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown: SSL alert number 46, session=<YQkZ5sLFrSm2/Xx7>

Please help me: which file or config should I check?

Graylog does not receive logs from Docker Swarm Services

Posted: 27 Jun 2021 02:53 PM PDT

I'm new to Graylog and I'm trying to run it in a Docker container, but the logs from the other containers do not arrive in Graylog, and nothing is displayed in the Graylog web interface under Search.

What should I do so that the containers' logs arrive in Graylog?

Below, I describe my try:

On a single host, running docker swarm with just one node (itself).

The local IP of this host is: 10.0.0.5

Inside a folder, I've some files:

  • docker-compose.yml
  • graylog.js

The content of my docker-compose.yml is:

version: "3.3"
networks:
  ambiente:
    external: true
services:
  # MONGO
  mongo:
    image: mongo:4.2
    networks:
      - ambiente
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=drUqGGCMh
    volumes:
      - ./graylog.js:/docker-entrypoint-initdb.d/graylog.js:ro

  # ELASTICSEARCH
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - ambiente
  # GRAYLOG
  graylog:
    image: graylog/graylog:4.1.0
    environment:
      - GRAYLOG_HTTP_EXTERNAL_URI=http://10.0.0.5:9000/
      # Pass is "admin"
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_ELASTICSEARCH_DISCOVERY_ENABLED=true
      - GRAYLOG_MONGODB_URI=mongodb://graylog:vWGzncmBe9@mongo:27017/graylog
      - GRAYLOG_MESSAGE_JOURNAL_ENABLED=false
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"
      - "1514:1514"
    networks:
      - ambiente

The graylog.js content is:

graylog = db.getSiblingDB('graylog');
graylog.createUser(
  {
    user: "graylog",
    pwd: "vWGzncmBe9",
    roles: [
      { role: "dbOwner", db: "graylog" }
    ]
  }
);

On the HOST, I created the file /etc/docker/daemon.json with the content:

{
  "metrics-addr" : "10.0.0.5:9323",
  "experimental" : true,
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://10.0.0.5:12201"
  }
}

After creating the file, I restarted the Docker service and checked its status:

service docker restart
service docker status

The status of docker service is ACTIVE:

 Active: active (running) since Sat 2021-06-26 16:58:31 -03; 1min 2s ago  

Then I created a Docker network:

docker network create -d overlay ambiente  

And then I deployed the stack:

docker stack deploy graylog -c docker-compose.yml   

With Graylog running, from the web interface under System/Inputs, I created a global input with:

bind_address: 0.0.0.0
decompress_size_limit: 8388608
number_worker_threads: 12
override_source:
port: 12201
recv_buffer_size: 262144
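One way to separate "the input isn't receiving anything" from "the containers aren't sending anything" is to hand-craft a GELF message and fire it at the published port yourself; if it appears in Search, the input is fine and the problem is on the sending side. A sketch follows: the address in send_gelf comes from the question, everything else is illustrative. Note also that a bare "12201:12201" entry under ports publishes TCP only; a GELF UDP input additionally needs "12201:12201/udp".

```python
import gzip
import json
import socket
import time

def gelf_message(host: str, short_message: str, **extra) -> bytes:
    """Build a gzip-compressed GELF 1.1 payload; Graylog's GELF UDP input
    decompresses gzip transparently."""
    record = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": 6,  # informational
    }
    # Per the GELF spec, custom fields carry an underscore prefix.
    record.update({f"_{k}": v for k, v in extra.items()})
    return gzip.compress(json.dumps(record).encode("utf-8"))

def send_gelf(payload: bytes, addr=("10.0.0.5", 12201)) -> None:
    """Send one GELF datagram to the input (address from the question)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, addr)
```

For example, `send_gelf(gelf_message("test-host", "hello graylog"))` should show up in the Search page within a few seconds if the input and port mapping are working.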

Thanks for any help!

Connecting to Multiple cameras which each acts as a WiFi Access Point

Posted: 27 Jun 2021 08:00 PM PDT

I have 7 cameras, each acting as a WiFi access point, and I cannot change their configuration.

Camera1 SSID: camera1, pass: 1234, it has static IP: 192.168.42.1 and built-in DHCP server
Camera2 SSID: camera2, pass: 1234, it has static IP: 192.168.42.1 and built-in DHCP server
..
Camera7 SSID: camera7, pass: 1234, it has static IP: 192.168.42.1 and built-in DHCP server

From my Windows notebook, using the internal WiFi adapter, I can connect to camera1's SSID and get video. Then I have to disconnect, connect to camera2's SSID to get video from camera2, and so on for cameras 3 through 7.

What I want is to get simultaneous videos from all of them.

What I tried: I plugged 7 USB WiFi adapters into my notebook, each configured to connect to a different camera. In this case, Windows shows 7 different network interfaces, and each gets its IP from the corresponding camera's DHCP server. But all cameras use the same IP, 192.168.42.1. Also, as I learned, multiple USB WiFi adapters are supported by Windows but not by macOS.

I need a universal solution to this problem, and so far I couldn't figure out how. Your help and suggestions are highly appreciated.

Thanks.

Further tests: I believe that I'm close to a solution, but I still need help. I took a Raspberry Pi 4 running Ubuntu, which I intend to use as a router. By default, the Pi comes with two network interfaces:

  1. eth0 -> 1Gbit cable connection
  2. wlan0 -> Embedded WiFi interface

I plugged in two extra USB WiFi dongles, so now it has two more network interfaces, named wlxb8b7f16a0602 and wlxb8b7f16a04cd. Each USB WiFi dongle is connected to a different camera. Here is the ifconfig output:

pi@pi:~$ ifconfig -a
eth0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether dc:a6:32:48:55:70  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.205  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2c43:aa7a:a4c8:47eb  prefixlen 64  scopeid 0x20<link>
        inet6 2a02:aa14:c480:6c80:9deb:968e:785d:159c  prefixlen 64  scopeid 0x0<global>
        inet6 2a02:aa14:c480:6c80:10b7:8a65:dce6:1f5c  prefixlen 64  scopeid 0x0<global>
        ether dc:a6:32:48:55:71  txqueuelen 1000  (Ethernet)
        RX packets 10535  bytes 2218695 (2.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44536  bytes 63167704 (63.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlxb8b7f16a0602: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.24  netmask 255.255.255.0  broadcast 192.168.42.255
        inet6 fe80::6523:f6cd:520b:ee0  prefixlen 64  scopeid 0x20<link>
        ether b8:b7:f1:6a:06:02  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 1495 (1.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47  bytes 9334 (9.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlxb8b7f16a04cd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.42.170  netmask 255.255.255.0  broadcast 192.168.42.255
        inet6 fe80::ad02:2e2e:cc11:c309  prefixlen 64  scopeid 0x20<link>
        ether b8:b7:f1:6a:04:cd  txqueuelen 1000  (Ethernet)
        RX packets 60  bytes 6531 (6.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 130  bytes 19353 (19.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

In this configuration;

  • eth0 -> not connected
  • wlan0 -> connected to my internet modem (used only for ssh to pi)
  • wlxb8b7f16a0602 -> connected to Camera1
  • wlxb8b7f16a04cd -> connected to Camera2

Even though each camera has the same IP (192.168.42.1), since they are connected to different interfaces I could ping them successfully using the -I parameter, as below:

For Camera1:

pi@pi:~$ ping -I wlxb8b7f16a0602 192.168.42.1
PING 192.168.42.1 (192.168.42.1) from 192.168.42.24 wlxb8b7f16a0602: 56(84) bytes of data.
64 bytes from 192.168.42.1: icmp_seq=2 ttl=64 time=3.77 ms

For Camera2:

pi@pi:~$ ping -I wlxb8b7f16a04cd 192.168.42.1
PING 192.168.42.1 (192.168.42.1) from 192.168.42.170 wlxb8b7f16a04cd: 56(84) bytes of data.
64 bytes from 192.168.42.1: icmp_seq=2 ttl=64 time=2.03 ms

From here, let's say I assign a static IP to my eth0 interface: 192.168.42.250.

I want to forward requests coming from:

  • 192.168.42.250:443 at eth0 to 192.168.42.1:443 at wlxb8b7f16a0602
  • 192.168.42.250:444 at eth0 to 192.168.42.1:443 at wlxb8b7f16a04cd

If you help me with this remaining point, I will accept your answer.

To @A.B:

pi@pi:~$ iw phy phy0 |grep netns
        phy <phyname> set netns { <pid> | name <nsname> }
                <nsname> - change network namespace by name from /run/netns
                           or by absolute path (man ip-netns)

pi@pi:~$ ll /sys/class/ieee80211
total 0
drwxr-xr-x  2 root root 0 Mai 27 17:13 ./
drwxr-xr-x 78 root root 0 Jan  1  1970 ../
lrwxrwxrwx  1 root root 0 Jun 27 20:18 phy0 -> ../../devices/platform/soc/fe300000.mmcnr/mmc_host/mmc1/mmc1:0001/mmc1:0001:1/ieee80211/phy0/
lrwxrwxrwx  1 root root 0 Mai 27 17:13 phy1 -> ../../devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.2/1-1.2:1.0/ieee80211/phy1/
lrwxrwxrwx  1 root root 0 Mai 27 17:13 phy2 -> ../../devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.3/1-1.3:1.0/ieee80211/phy2/
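Building on the netns capability shown in the iw output above, one namespace per camera sidesteps the duplicate-subnet problem entirely, and the 443/444 forwarding then becomes ordinary DNAT. The following is an untested sketch: the phy and interface names come from the question, while the cam1 namespace name, the 10.200.1.0/24 addresses, the veth names, and the wpa_supplicant config path are all illustrative; socat is assumed to be installed.

```shell
sysctl -w net.ipv4.ip_forward=1

# put camera1's dongle into its own namespace and join the camera's network
ip netns add cam1
iw phy phy1 set netns name cam1     # phy1 == wlxb8b7f16a0602
ip netns exec cam1 wpa_supplicant -B -i wlxb8b7f16a0602 -c /etc/wpa_supplicant/cam1.conf
ip netns exec cam1 dhclient wlxb8b7f16a0602   # gets 192.168.42.x from camera1

# veth pair so the default namespace can reach into cam1
ip link add veth-cam1 type veth peer name veth0 netns cam1
ip addr add 10.200.1.1/24 dev veth-cam1
ip link set veth-cam1 up
ip netns exec cam1 ip addr add 10.200.1.2/24 dev veth0
ip netns exec cam1 ip link set veth0 up

# inside cam1: relay TCP 443 to the camera
ip netns exec cam1 socat TCP-LISTEN:443,fork,reuseaddr,bind=10.200.1.2 TCP:192.168.42.1:443 &

# default namespace: expose it as 192.168.42.250:443 on eth0
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.200.1.2:443
iptables -t nat -A POSTROUTING -o veth-cam1 -j MASQUERADE

# repeat with cam2/phy2, veth-cam2, 10.200.2.0/24, and host port 444
```

Because each camera's 192.168.42.0/24 now lives in its own namespace, the identical addresses never meet in one routing table.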

Enable CORS in a specific folder with dot in directory name

Posted: 27 Jun 2021 06:25 PM PDT

I am trying to enable CORS for a specific file (stellar.toml) located at mydomain.com/.well-known/stellar.toml

I added the catch-all allow below, for testing, to the .htaccess file on my LiteSpeed/WordPress site:

Access-Control-Allow-Origin: *  

If I test it using curl, I do not see 'access-control-allow-origin: *'. However, if I rename the directory just by removing the dot from its name (from .well-known to well-known) and run curl again, it works:

curl --head mydomain.com/well-known/stellar.toml  

(screenshot of the response headers omitted)

What is happening?
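Two things seem worth checking here. Hosting stacks frequently ship a rule that denies dot-prefixed paths (e.g. a RedirectMatch 404 on /\..* in the server config), which would explain why the same rule starts working once the dot is removed. And a bare header line on its own is not valid .htaccess syntax; a hedged sketch of the usual mod_headers form, scoped to the one file (assumes mod_headers is enabled):

```
<Files "stellar.toml">
    Header set Access-Control-Allow-Origin "*"
</Files>
```

If the dot-directory block lives in the LiteSpeed/virtual-host config rather than .htaccess, the exception has to be added there, since .htaccess in a denied path is never consulted.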

Nginx: CSS and JS files inside my wordpress blog directory are served wrong

Posted: 27 Jun 2021 08:03 PM PDT

I set up an Amazon EC2 LEMP server for my photography website, which was previously on Apache, which I'm much more familiar with.

I have everything running generally OK, except in the blog directory. The CSS and JS files seem to be served by PHP and have content type text/html; for example, here are the response headers for my theme's stylesheet (/blog/wp-content/themes/twentyseventeen/style.css?ver=4.9.8):

content-type: text/html
date: Fri, 26 Oct 2018 02:33:26 GMT
server: nginx/1.12.2
status: 200
x-powered-by: PHP/5.4.16

vs the headers for my own stylesheet (/include/css/style.css):

accept-ranges: bytes
cache-control: max-age=315360000
content-length: 34199
content-type: text/css
date: Fri, 26 Oct 2018 02:48:04 GMT
etag: "5b7f653b-8597"
expires: Thu, 31 Dec 2037 23:55:55 GMT
last-modified: Fri, 24 Aug 2018 01:54:03 GMT
server: nginx/1.12.2
status: 200

I've read lots of threads that deal with very similar problems. However, I'm confused because my problem is confined to the /blog/ directory.

A few of the other questions/answers I read mentioned security.limit_extensions, and indeed mine (/etc/php-fpm.d/www.conf) was set up like so:

security.limit_extensions =
;security.limit_extensions = .php .php3 .php4 .php5 .ttf

I changed it:

;security.limit_extensions =
security.limit_extensions = .php .php3 .php4 .php5 .ttf

and restarted nginx via service nginx restart, but the problem still persists.

Can't imagine what I'm missing.. Ready to throw in the towel and switch back to apache.. :(

Anyone see what I missed?

UPDATE: Config files

/etc/nginx/nginx.conf:

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/
#user ec2-user;

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    server_names_hash_bucket_size 64;

    client_max_body_size 2M;

    include             mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-available/mikewillisphotography.com.conf

server {
    listen 80 default_server;
    server_name www.mikewillisphotography.com mikewillisphotography.com;
    return 301 https://www.mikewillisphotography.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name mikewillisphotography.com;
    return 301 https://www.mikewillisphotography.com$request_uri;
}

server {
    listen       443 ssl default_server;
    server_name  www.mikewillisphotography.com;
    #server_name localhost;

    include /etc/nginx/sites-available/includes/restrictions.conf;
    include /etc/nginx/sites-available/includes/wordpress.conf;
    #   include /etc/nginx/sites-available/includes/php.conf;

    ssl_certificate /etc/letsencrypt/live/mikewillisphotography.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mikewillisphotography.com/privkey.pem;

    location /.well-known/acme-challenge {
        #root /var/www/html/letsencrypt/wordpress/;
        root /usr/share/nginx/sites/mikewillisphotography.com/htdocs/letsencrypt/wordpress/;
    }

    client_max_body_size 2M;

    # note that these lines are originally from the "location /" block
    root   /usr/share/nginx/sites/mikewillisphotography.com/htdocs;
    index index.php index.html index.htm;

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/sites/mikewillisphotography.com/htdocs;
    }

    location ~ \.php$ {
        include /etc/nginx/sites-available/includes/php.conf;
    }
}

/etc/nginx/sites-available/includes/php.conf

fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;

#wordpress stuff
#NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
include fastcgi.conf;
fastcgi_intercept_errors on;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;

/etc/nginx/sites-available/includes/wordpress.conf

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}

location ^~ /blog {
    root /usr/share/nginx/sites/mikewillisphotography.com/htdocs;
    index index.php index.html index.htm;
    include /etc/nginx/sites-available/includes/php.conf;
    rewrite /wp-admin$ $scheme://$host$uri/index.php?q=$1 permanent;
    try_files $uri $uri/ @blog;
}

location @blog {
    rewrite ^/blog(.*) /blog/index.php?q=$1;
}
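A likely culprit is the include of php.conf directly inside location ^~ /blog: that places fastcgi_pass at the location level, so every URI under /blog, stylesheets included, is handed to PHP-FPM, which matches the text/html and x-powered-by: PHP headers above. An untested sketch that nests the PHP handler so static files fall through to try_files instead:

```
location ^~ /blog {
    root /usr/share/nginx/sites/mikewillisphotography.com/htdocs;
    index index.php index.html index.htm;
    rewrite /wp-admin$ $scheme://$host$uri/index.php?q=$1 permanent;
    try_files $uri $uri/ @blog;

    # only .php URIs reach PHP-FPM now
    location ~ \.php$ {
        include /etc/nginx/sites-available/includes/php.conf;
    }
}
```

The @blog named location can stay as-is; after its rewrite to /blog/index.php, the request re-enters ^~ /blog and the nested \.php$ block picks it up.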

Unexpected and unexplained slow (and unusual) memory performance with Xeon Skylake SMP

Posted: 27 Jun 2021 03:04 PM PDT

We've been testing a server with 2x Xeon Gold 6154 CPUs on a Supermicro X11DPH-I motherboard with 96GB RAM, and found some very strange memory performance issues when compared to running with only 1 CPU (one socket empty), to a similar dual-CPU Haswell Xeon E5-2687Wv3 setup (used for this series of tests; other Broadwells perform similarly), and to Broadwell-E i7s and Skylake-X i9s (for comparison).

One would expect the Skylake Xeon processors, with their faster memory, to outperform the Haswells at the various memcpy functions and even at memory allocation (not covered in the tests below, as we found a workaround). Instead, with both CPUs installed, the Skylake Xeons run at almost half the speed of the Haswell Xeons, and slower still compared to an i7-6800K. What's even stranger: when using Windows VirtualAllocExNuma to assign the NUMA node for memory allocation, the plain memory-copy functions expectedly perform worse on the remote node than on the local node, but memory-copy functions using the SSE, MMX, and AVX registers perform much faster on the remote NUMA node than on the local node (what?). As noted above, if we pull one CPU, the Skylake Xeons perform more or less as expected (still a bit slower than Haswell, but not dramatically so).

I'm not sure if this is a bug in the motherboard or the CPUs, or something to do with UPI vs. QPI, or none of the above, but no combination of BIOS settings seems to remedy it. Disabling NUMA in the BIOS (not included in the test results) does improve the performance of all copy functions using the SSE, MMX, and AVX registers, but all the plain memory-copy functions suffer large losses as well.

For our test program, we tested both inline assembly functions and _mm intrinsics. We used Windows 10 with Visual Studio 2017 for everything except the assembly functions: since MSVC++ won't compile asm for x64, we used gcc from MinGW/MSYS to compile an object file with the -c -O2 flags, which we then fed to the MSVC++ linker.

If the system is using NUMA nodes, we test memory allocated both with operator new and with VirtualAllocExNuma for each NUMA node, take a cumulative average over 100 copies of a 16MB buffer for each memory-copy function, and rotate which allocation we are using between each set of tests.

All 100 source and 100 destination buffers are 64-byte aligned (for compatibility up to AVX-512 with streaming functions). The source buffers are initialized once to incremental data, and the destination buffers to 0xff.

The number of copies averaged on each machine with each configuration varied, as some were much faster and others much slower.

Results were as follows:

Haswell Xeon E5-2687Wv3 1 CPU (1 empty socket) on Supermicro X10DAi with 32GB DDR4-2400 (10c/20t, 25 MB of L3 cache). But remember, the benchmark rotates through 100 pairs of 16MB buffers, so we probably aren't getting L3 cache hits.

---------------------------------------------------------------------------
Averaging 7000 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 2264.48 microseconds
asm_memcpy (asm)                 averaging 2322.71 microseconds
sse_memcpy (intrinsic)           averaging 1569.67 microseconds
sse_memcpy (asm)                 averaging 1589.31 microseconds
sse2_memcpy (intrinsic)          averaging 1561.19 microseconds
sse2_memcpy (asm)                averaging 1664.18 microseconds
mmx_memcpy (asm)                 averaging 2497.73 microseconds
mmx2_memcpy (asm)                averaging 1626.68 microseconds
avx_memcpy (intrinsic)           averaging 1625.12 microseconds
avx_memcpy (asm)                 averaging 1592.58 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 2260.6 microseconds

Haswell dual Xeon E5-2687Wv3, 2 CPUs, on Supermicro X10DAi with 64GB RAM

---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 0(local)
---------------------------------------------------------------------------
std::memcpy                      averaging 3179.8 microseconds
asm_memcpy (asm)                 averaging 3177.15 microseconds
sse_memcpy (intrinsic)           averaging 1633.87 microseconds
sse_memcpy (asm)                 averaging 1663.8 microseconds
sse2_memcpy (intrinsic)          averaging 1620.86 microseconds
sse2_memcpy (asm)                averaging 1727.36 microseconds
mmx_memcpy (asm)                 averaging 2623.07 microseconds
mmx2_memcpy (asm)                averaging 1691.1 microseconds
avx_memcpy (intrinsic)           averaging 1704.33 microseconds
avx_memcpy (asm)                 averaging 1692.69 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3185.84 microseconds
---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 1
---------------------------------------------------------------------------
std::memcpy                      averaging 3992.46 microseconds
asm_memcpy (asm)                 averaging 4039.11 microseconds
sse_memcpy (intrinsic)           averaging 3174.69 microseconds
sse_memcpy (asm)                 averaging 3129.18 microseconds
sse2_memcpy (intrinsic)          averaging 3161.9 microseconds
sse2_memcpy (asm)                averaging 3141.33 microseconds
mmx_memcpy (asm)                 averaging 4010.17 microseconds
mmx2_memcpy (asm)                averaging 3211.75 microseconds
avx_memcpy (intrinsic)           averaging 3003.14 microseconds
avx_memcpy (asm)                 averaging 2980.97 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3987.91 microseconds
---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 3172.95 microseconds
asm_memcpy (asm)                 averaging 3173.5 microseconds
sse_memcpy (intrinsic)           averaging 1623.84 microseconds
sse_memcpy (asm)                 averaging 1657.07 microseconds
sse2_memcpy (intrinsic)          averaging 1616.95 microseconds
sse2_memcpy (asm)                averaging 1739.05 microseconds
mmx_memcpy (asm)                 averaging 2623.71 microseconds
mmx2_memcpy (asm)                averaging 1699.33 microseconds
avx_memcpy (intrinsic)           averaging 1710.09 microseconds
avx_memcpy (asm)                 averaging 1688.34 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3175.14 microseconds

Skylake Xeon Gold 6154 1 CPU (1 empty socket) on Supermicro X11DPH-I with 48GB DDR4-2666 (18c/36t, 24.75 MB of L3 cache)

---------------------------------------------------------------------------
Averaging 5000 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 1832.42 microseconds
asm_memcpy (asm)                 averaging 1837.62 microseconds
sse_memcpy (intrinsic)           averaging 1647.84 microseconds
sse_memcpy (asm)                 averaging 1710.53 microseconds
sse2_memcpy (intrinsic)          averaging 1645.54 microseconds
sse2_memcpy (asm)                averaging 1794.36 microseconds
mmx_memcpy (asm)                 averaging 2030.51 microseconds
mmx2_memcpy (asm)                averaging 1816.82 microseconds
avx_memcpy (intrinsic)           averaging 1686.49 microseconds
avx_memcpy (asm)                 averaging 1716.15 microseconds
avx512_memcpy (intrinsic)        averaging 1761.6 microseconds
rep movsb (asm)                  averaging 1977.6 microseconds

Skylake Xeon Gold 6154 2 CPU on Supermicro X11DPH-I with 96GB DDR4-2666

---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 0 (local)
---------------------------------------------------------------------------
std::memcpy                      averaging 3131.6 microseconds
asm_memcpy (asm)                 averaging 3070.57 microseconds
sse_memcpy (intrinsic)           averaging 3297.72 microseconds
sse_memcpy (asm)                 averaging 3423.38 microseconds
sse2_memcpy (intrinsic)          averaging 3274.31 microseconds
sse2_memcpy (asm)                averaging 3413.48 microseconds
mmx_memcpy (asm)                 averaging 2069.53 microseconds
mmx2_memcpy (asm)                averaging 3694.91 microseconds
avx_memcpy (intrinsic)           averaging 3118.75 microseconds
avx_memcpy (asm)                 averaging 3224.36 microseconds
avx512_memcpy (intrinsic)        averaging 3156.56 microseconds
rep movsb (asm)                  averaging 3155.36 microseconds
---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 1
---------------------------------------------------------------------------
std::memcpy                      averaging 5309.77 microseconds
asm_memcpy (asm)                 averaging 5330.78 microseconds
sse_memcpy (intrinsic)           averaging 2350.61 microseconds
sse_memcpy (asm)                 averaging 2402.57 microseconds
sse2_memcpy (intrinsic)          averaging 2338.61 microseconds
sse2_memcpy (asm)                averaging 2475.51 microseconds
mmx_memcpy (asm)                 averaging 2883.97 microseconds
mmx2_memcpy (asm)                averaging 2517.69 microseconds
avx_memcpy (intrinsic)           averaging 2356.07 microseconds
avx_memcpy (asm)                 averaging 2415.22 microseconds
avx512_memcpy (intrinsic)        averaging 2487.01 microseconds
rep movsb (asm)                  averaging 5372.98 microseconds
---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 3075.1 microseconds
asm_memcpy (asm)                 averaging 3061.97 microseconds
sse_memcpy (intrinsic)           averaging 3281.17 microseconds
sse_memcpy (asm)                 averaging 3421.38 microseconds
sse2_memcpy (intrinsic)          averaging 3268.79 microseconds
sse2_memcpy (asm)                averaging 3435.76 microseconds
mmx_memcpy (asm)                 averaging 2061.27 microseconds
mmx2_memcpy (asm)                averaging 3694.48 microseconds
avx_memcpy (intrinsic)           averaging 3111.16 microseconds
avx_memcpy (asm)                 averaging 3227.45 microseconds
avx512_memcpy (intrinsic)        averaging 3148.65 microseconds
rep movsb (asm)                  averaging 2967.45 microseconds

Skylake-X i9-7940X on ASUS ROG Rampage VI Extreme with 32GB DDR4-4266 (14c/28t, 19.25 MB of L3 cache) (overclocked to 3.8GHz/4.4GHz turbo, DDR at 4040MHz, Target AVX Frequency 3737MHz, Target AVX-512 Frequency 3535MHz, target cache frequency 2424MHz)

---------------------------------------------------------------------------
Averaging 6500 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 1750.87 microseconds
asm_memcpy (asm)                 averaging 1748.22 microseconds
sse_memcpy (intrinsic)           averaging 1743.39 microseconds
sse_memcpy (asm)                 averaging 3120.18 microseconds
sse2_memcpy (intrinsic)          averaging 1743.37 microseconds
sse2_memcpy (asm)                averaging 2868.52 microseconds
mmx_memcpy (asm)                 averaging 2255.17 microseconds
mmx2_memcpy (asm)                averaging 3434.58 microseconds
avx_memcpy (intrinsic)           averaging 1698.49 microseconds
avx_memcpy (asm)                 averaging 2840.65 microseconds
avx512_memcpy (intrinsic)        averaging 1670.05 microseconds
rep movsb (asm)                  averaging 1718.77 microseconds

Broadwell i7-6800k on ASUS X99 with 24GB DDR4-2400 (6c/12t, 15 MB of L3 cache)

---------------------------------------------------------------------------
Averaging 64900 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 2522.1 microseconds
asm_memcpy (asm)                 averaging 2615.92 microseconds
sse_memcpy (intrinsic)           averaging 1621.81 microseconds
sse_memcpy (asm)                 averaging 1669.39 microseconds
sse2_memcpy (intrinsic)          averaging 1617.04 microseconds
sse2_memcpy (asm)                averaging 1719.06 microseconds
mmx_memcpy (asm)                 averaging 3021.02 microseconds
mmx2_memcpy (asm)                averaging 1691.68 microseconds
avx_memcpy (intrinsic)           averaging 1654.41 microseconds
avx_memcpy (asm)                 averaging 1666.84 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 2520.13 microseconds

The assembly functions are derived from fast_memcpy in xine-libs, mostly used just to compare with msvc++'s optimizer.

Source Code for the test is available at https://github.com/marcmicalizzi/memcpy_test (it's a bit long to put in the post)

Has anyone else run into this or does anyone have any insight on why this might be happening?


Update 2018-05-15 13:40EST

So, as suggested by Peter Cordes, I've updated the test to compare prefetched vs. non-prefetched copies and NT stores vs. regular stores, and I've tuned the prefetching done in each function. (I don't have any meaningful experience writing prefetch code, so if I'm making mistakes, please let me know and I'll adjust the tests accordingly; the prefetching does have an impact, so at the very least it's doing something.) These changes are reflected in the latest revision at the GitHub link I gave earlier, for anyone looking for the source code.

I've also added an SSE4.1 memcpy, since before SSE4.1 there is no _mm_stream_load SSE intrinsic (I specifically used _mm_stream_load_si128), so sse_memcpy and sse2_memcpy can't use non-temporal loads throughout; the avx_memcpy function uses AVX2 intrinsics for stream loading.

I opted not to test pure-store and pure-load access patterns yet, as I'm not sure a pure-store test would be meaningful: without a load of the data it writes, the result would be meaningless and unverifiable.

The interesting result of the new test was that on the dual-socket Skylake Xeon setup, and only on that setup, the regular-store functions were actually significantly faster than the NT streaming functions for 16MB copies. Also, only on that setup (and only with LLC prefetch enabled in the BIOS), prefetchnta outperforms both prefetcht0 and no prefetch in some tests (SSE, SSE4.1).

The raw results of this new test are too long to add to the post, so they are posted in the same git repository as the source code, under results-2018-05-15.

I still don't understand why, for streaming NT stores, the remote NUMA node is faster under the Skylake SMP setup, although regular stores are still faster than NT stores on the local NUMA node.

FileZilla will not connect to Google Cloud VM after changing the SSH Keys

Posted: 27 Jun 2021 06:04 PM PDT

I have been using FileZilla to manage files on my GCP VM over SFTP. I created a key pair with ssh-keygen on Ubuntu Linux, copied the public key into the SSH Keys of my VM instance, and loaded the private key into FileZilla, which converted it to .ppk format. That worked great.

This weekend my public key expired, so I created a new key pair and placed the keys the same way I had previously. But now FileZilla gives me the error "Disconnected: No supported authentication methods available (server sent: publickey)".

I tried putting the pub key in the VM instance, in the project, and in both places at once, but no combination has worked.

Per other solutions I found online, I have ensured that the local key files are in a folder that FileZilla has full access to.

I've been fighting with this for hours, and am at wit's end.

Edit: Here's the sequence of messages FileZilla shows when I try to connect:

Status:     Connecting to 104.199.127.13...
Response:   fzSftp started, protocol_version=4
Command:    keyfile "/home/steve/.ssh/teamifi-key.ppk"
Command:    open "steve@104.199.127.13" 22
Error:      Disconnected: No supported authentication methods available (server sent: publickey)
Error:      Could not connect to server
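If it helps anyone hitting the same wall: GCE only accepts public keys entered into metadata in the <username>:<key> form, so a key pasted without the username prefix is silently ignored. A hedged sketch of regenerating and formatting a key (the username steve is taken from the question; paths are illustrative):

```shell
# Generate a fresh key pair in a scratch directory (no passphrase for brevity).
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -f "$keydir/gcp-key" -C steve -N '' >/dev/null

# Print the public key in the "<username>:<key>" form GCE metadata expects.
printf 'steve:%s\n' "$(cat "$keydir/gcp-key.pub")"
```

Also note that keys added through the Cloud Console with an expiry carry a trailing google-ssh {...,"expireOn":...} JSON suffix; an expired date there makes the server reject the key with exactly the "server sent: publickey" error in the log above.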

How to execute "-NoNewWindow" parameter in powershell v4

Posted: 27 Jun 2021 03:04 PM PDT

I used the following command in my automation task, but it throws an exception like the one below:

parameter set cannot be resolved

Command I used:

Start-Process -FilePath powershell.exe -NoNewWindow -ArgumentList $code -verb RunAs  

How can I run the PowerShell commands in the same command prompt, and how can I track their logs?
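The error comes from mixing parameters that belong to different parameter sets: Start-Process cannot combine -NoNewWindow with -Verb RunAs. A sketch of the two alternatives (assuming $code holds the arguments, as in the question; log file names are illustrative):

```powershell
# Either elevate - this always opens a new window, so drop -NoNewWindow:
Start-Process -FilePath powershell.exe -ArgumentList $code -Verb RunAs

# ...or stay in the current console and capture output so it can be tracked:
Start-Process -FilePath powershell.exe -ArgumentList $code -NoNewWindow -Wait `
    -RedirectStandardOutput .\out.log -RedirectStandardError .\err.log
```

You cannot get both elevation and the current console from one Start-Process call; if the whole script can run elevated, launching the outer script "as administrator" and then invoking the code inline avoids the conflict entirely.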

How to enumerate network interfaces in Ansible

Posted: 27 Jun 2021 02:11 PM PDT

I'd like to get an ordered list of the network interfaces on a machine using Ansible. Modern Linux systems don't use eth0, eth1, etc., so the names are unpredictable. On our network we connect the lowest-numbered interface to the LAN and the highest to the WAN, so I can use an interface's position in an ordered list to determine its function.

I am looking for the canonical way to do this in Ansible, so that I can use something like {{ansible_eth0.ipv4.address}} (where eth0 is actually some other name).

Even if I manually set a variable containing the name of the interface, there seems to be no way to get the IP of that interface using the contents of the variable.

I'd like to process the Ansible facts to get what I want rather than running a shell script on the remote system.
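A sketch of one way to do this with facts only, under the assumption that sorting the names in ansible_interfaces puts the LAN interface first and the WAN interface last (the nics variable and the lo exclusion are illustrative):

```yaml
- hosts: all
  tasks:
    - set_fact:
        nics: "{{ ansible_interfaces | difference(['lo']) | sort }}"
    # Facts for an interface live under ansible_<name>, so a name held in
    # a variable can be resolved through the vars lookup:
    - debug:
        msg: >-
          LAN IP is {{ vars['ansible_' + (nics | first)].ipv4.address }},
          WAN IP is {{ vars['ansible_' + (nics | last)].ipv4.address }}
```

Note that Ansible replaces characters such as - with _ in fact names, so interface names containing dashes need the same substitution (e.g. | replace('-', '_')) before the lookup.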

exclude one subdomain from serving via https on apache

Posted: 27 Jun 2021 09:11 PM PDT

A little intro:

I have an SSL cert for specific domains (example.com, info.example.com, intranet.example.com, ...).

All domains except info.example.com run on Apache; info.example.com runs on a Windows server.

My problem:
All sites are fully working, but info.example.com is the one I don't want to serve over HTTPS (yet). However, I do have a permanent redirect in my VirtualHosts, and that causes the problem: when I visit info.example.com, it redirects me to https://info.example.com, which I don't want. If I delete the permanent redirect it works, but then I'd be serving example.com over plain HTTP, which I don't want either.

<VirtualHost *:80>
    ServerAdmin user@user.com
    DocumentRoot "/var/www/html"
    ServerName example.com
    Redirect permanent / https://example.com/
    <Directory "/var/www/html">
        Options -Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

I wouldn't even ask here, because it must be something really simple, but I've been struggling with this for too long.

Excuse my English.
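A sketch of the usual fix, assuming DNS for info.example.com points at the Apache box: give that hostname its own port-80 VirtualHost without the redirect, so name-based matching keeps it off the catch-all vhost (the proxy target standing in for the Windows server is illustrative):

```apache
<VirtualHost *:80>
    ServerName info.example.com
    # Serve or forward this one host over plain HTTP; everything else
    # still matches the existing vhost with the HTTPS redirect.
    ProxyPass        "/" "http://192.0.2.10/"
    ProxyPassReverse "/" "http://192.0.2.10/"
</VirtualHost>
```

If DNS for info.example.com instead points directly at the Windows server, Apache never sees those requests and no Apache change is needed at all.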

How do I tell when/if/why a container in a kubernetes cluster restarts?

Posted: 27 Jun 2021 07:05 PM PDT

I have a single node kubernetes cluster in google container engine to play around with.

Twice now, a small personal website I host in it has gone offline for a couple of minutes. When I view the container's logs, I see that the normal startup sequence completed recently, so I assume the container died (or was killed?) and restarted.

How can I figure out the how & why of this happening?

Is there a way to get an alert whenever a container starts/stops unexpectedly?
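For the how and why, the restart reason is recorded on the pod object itself; a few kubectl invocations that surface it (pod names are placeholders):

```shell
# "Last State" shows how the previous run ended: OOMKilled, Error,
# liveness-probe failure, etc., along with the restart count.
kubectl describe pod <pod-name>

# Recent cluster events, including kills, evictions and reschedules:
kubectl get events --sort-by=.metadata.creationTimestamp

# Logs from the *previous* instance of the container, i.e. before the restart:
kubectl logs <pod-name> --previous
```

For alerting, the restartCount field in the pod's containerStatuses is the thing to watch; a monitoring stack can fire whenever it increases.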

php-fpm: locale settings change themselves

Posted: 27 Jun 2021 05:01 PM PDT

I experienced a bug with php-fpm: locale settings change themselves randomly.

Here are the correct locale settings:

Array
(
    [decimal_point] => .
    [thousands_sep] =>
    [int_curr_symbol] =>
    [currency_symbol] =>
    [mon_decimal_point] =>
    [mon_thousands_sep] =>
    [positive_sign] =>
    [negative_sign] =>
    [int_frac_digits] => 127
    [frac_digits] => 127
    [p_cs_precedes] => 127
    [p_sep_by_space] => 127
    [n_cs_precedes] => 127
    [n_sep_by_space] => 127
    [p_sign_posn] => 127
    [n_sign_posn] => 127
    [grouping] => Array
        (
        )
    [mon_grouping] => Array
        (
        )
)

And here are the changed settings:

Array
(
    [decimal_point] => ,
    [thousands_sep] =>
    [int_curr_symbol] => EUR
    [currency_symbol] => €
    [mon_decimal_point] => ,
    [mon_thousands_sep] =>
    [positive_sign] =>
    [negative_sign] => -
    [int_frac_digits] => 2
    [frac_digits] => 2
    [p_cs_precedes] => 0
    [p_sep_by_space] => 1
    [n_cs_precedes] => 0
    [n_sep_by_space] => 1
    [p_sign_posn] => 1
    [n_sign_posn] => 1
    [grouping] => Array
        (
            [0] => 3
        )
    [mon_grouping] => Array
        (
            [0] => 3
        )
)

The problem occurs randomly.

When I remove php-fpm and use FastCGI, the problem no longer occurs. How can I get this working with php-fpm? The problem occurs on shared hosting (we are the company providing the hosting), and we really need php-fpm in order to use pools.

Thanks in advance!

EDIT: Today I discovered that the problem occurs when we use the ondemand process manager and not with the static process manager.
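This matches the process-manager difference noted in the edit: with ondemand, requests land on freshly forked workers whose inherited libc locale can differ from that of long-lived static workers. Until the root cause is found, a per-pool sketch of the workaround (pool name and child count are illustrative):

```ini
; /etc/php-fpm.d/<pool>.conf
[mypool]
; static keeps a fixed set of workers, which the report above found unaffected
pm = static
pm.max_children = 5
```

Alternatively, an application can defensively call setlocale(LC_ALL, 'C') at the start of each request instead of trusting the worker process's inherited locale.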

/usr/sbin/thin_check: execvp failed: No such file or directory

Posted: 27 Jun 2021 04:01 PM PDT

Running CentOS 6.5, a brand new "minimal" server install, trying to use LVM2 thin pool feature. Packages installed are:

lvm2 (2.02.111)
device-mapper-persistent-data-0.3.2-1.el6.x86_64

/etc/lvm/lvm.conf has

...
thin_check_executable = "/usr/sbin/thin_check"
thin_repair_executable = "/usr/sbin/thin_repair"
...

(And yes those files exist in the file system.)

Error message in /var/log/boot.log is:

Setting up Logical Volume Management:
/usr/sbin/thin_check: execvp failed: No such file or directory
Check of pool vg/pool failed (status:2). Manual repair required!
/usr/sbin/thin_check: execvp failed: No such file or directory
/usr/sbin/thin_check: execvp failed: No such file or directory

I'm at a loss as to what to do with this.
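"No such file or directory" from execvp with the file plainly present usually means something else in the execution path is missing: at boot, a separately mounted /usr is not yet available when LVM activates volumes, and a missing dynamic loader for the binary produces the same message. Two hedged checks:

```shell
# Is /usr its own filesystem (not yet mounted during early LVM activation)?
grep '/usr' /etc/fstab

# Can the binary's interpreter/loader and libraries all be resolved?
ldd /usr/sbin/thin_check
```

If /usr turns out to be separate, a common workaround is copying thin_check and thin_repair to /sbin and pointing the lvm.conf entries above at the copies.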

WSUS - Cannot have two server clients at the same time

Posted: 27 Jun 2021 07:03 PM PDT

I'm running WSUS on 2012 R2. I have two servers, let's say Server1 and Server2. Both have GPOs configured to use my WSUS server.

When I connect to the WSUS server and launch the Update Services MMC, the "All Computers" category shows Server1, but not Server2.

When I run wuauclt.exe /resetauthorization /detectnow on Server2, the Update Services MMC shows Server2, but Server1 disappears...

I don't understand. These two servers are VMware VMs deployed from a template, and both are in the same domain as the WSUS server. All VMs can ping each other, so there is no network-related problem.

Any ideas?
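Template-cloned VMs are the classic cause of this symptom: both clones carry the same SusClientId, so WSUS treats them as one computer and the last one to report "wins". A sketch of the usual reset, run on each clone from an elevated cmd.exe:

```
net stop wuauserv
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientIdValidation /f
net start wuauserv
wuauclt /resetauthorization /detectnow
```

After the identity is regenerated, both servers should appear in the console at the same time.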

Accessing a storage-side snapshot of a cluster-shared volume

Posted: 27 Jun 2021 08:03 PM PDT

From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:

  1. expose the shadow copy as a separate LUN
  2. offline the original CSV in the cluster
  3. un-expose the LUN carrying the original CSV
  4. make sure my cluster nodes have detected the new LUN and no longer list the original one
  5. add the volume to the list of cluster volumes, promote it to be a CSV
  6. copy off the data I need
  7. undo steps 5. - 1. to revert to the original configuration

This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?
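One less disruptive route, if the array can present the snapshot as a new LUN while the original stays online: Windows keeps the clone offline because its disk signature collides with the live CSV, but the signature can be rewritten so both disks coexist on a single cluster node. A diskpart sketch (the disk number and new id are illustrative):

```
select disk 5
attributes disk clear readonly
uniqueid disk id=12345678
online disk
```

The clone then mounts as an ordinary local volume on whichever node the LUN was masked to, and the data can be copied off without ever taking the original CSV down.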

How can I exclude the admin area from the litespeed cache when running Magento?

Posted: 27 Jun 2021 09:11 PM PDT

I have LiteSpeed on my server and am trying to use its cache system. I have followed the wiki instructions.

And I have this in my Magento .htaccess:

RewriteRule (.*\.php)?$ - [L,E=Cache-Control:max-age=120]  

The cache is working (I'm getting the X-response hit in the header), but I can't find a way to exclude the admin area from the cache.
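One way is to guard the existing rule with a RewriteCond, so requests into the admin area never get the cache-control environment set ("admin" is Magento's default frontName; adjust if it was customised):

```apache
RewriteCond %{REQUEST_URI} !^/(index\.php/)?admin
RewriteRule (.*\.php)?$ - [L,E=Cache-Control:max-age=120]
```

Since admin URLs never match the rule, no Cache-Control environment variable is set for them and LiteSpeed leaves those responses uncached.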

How do I configure MailScanner to use a remote clamd?

Posted: 27 Jun 2021 07:03 PM PDT

I decided to decrease the workload on my mail gateway by moving anti-virus processing to a separate server. I created the server, installed clamav-daemon on it, and tested it by running clamdscan from the mail gateway.

Satisfied, I then changed the MailScanner configuration as follows:

Virus Scanners = clamd
Clamd Port = 3310
Clamd Socket = clamd server's IP address

I restarted mailscanner, and got the following result:

MailScanner[45946]: Clamd::ERROR:: UNKNOWN CLAMD RETURN ./lstat() failed: No such file or directory. ERROR :: /var/spool/MailScanner/incoming/45946  

Obviously, MailScanner is not sending the file to be scanned; instead, it is telling clamd to scan a file which, of course, does not exist on clamd's server.

I find it difficult to believe that using clamd in this way with MailScanner is impossible; rather, I suspect I'm missing something. So: is it possible, and if so, what configuration am I missing?
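Your reading of the error is right: the clamd scanner type hands clamd a path, not the file contents, so a remote clamd must be able to see the gateway's incoming queue at the same path, e.g. over NFS. A hedged sketch of the relevant pieces (the export options and IP are illustrative):

```
# On the gateway (/etc/exports), export the scan directory to the clamd host:
#   /var/spool/MailScanner/incoming  192.0.2.20(ro,no_root_squash)
# On the clamd host, mount that export at the *same* path.

# MailScanner.conf, as in the question:
Virus Scanners = clamd
Clamd Port = 3310
Clamd Socket = 192.0.2.20
```

Once clamd can lstat() the spool path locally, the UNKNOWN CLAMD RETURN error above should disappear; without a shared filesystem, running clamd locally (or a milter-style scanner that streams content) is the alternative.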

log openvpn traffic by squid

Posted: 27 Jun 2021 06:04 PM PDT

Is there any way to log OpenVPN traffic (the sites that users visit)?

I thought that because OpenVPN uses a tunnel and encrypts traffic, visited websites couldn't be logged; but on one server I saw that OpenVPN users' traffic was being logged. Is there any way to do that?
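What you likely saw is a gateway-side setup: once traffic leaves the tunnel on the VPN server it is plain IP again and can be proxied or logged like any LAN traffic. A sketch, assuming the standard tun0 device and a local Squid on port 3128:

```shell
# On the OpenVPN server: transparently divert clients' HTTP through Squid,
# whose access.log then records the visited sites.
iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

HTTPS payloads cannot be read this way; at best the hostname can be logged from the TLS SNI (e.g. Squid's ssl_bump peeking) or from the DNS queries the VPN clients send through the server.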

Expand open_basedir for virtualhosts

Posted: 27 Jun 2021 04:01 PM PDT

All my virtual hosts have their own open_basedir directive, like:

php_admin_value open_basedir "/var/www/user/data:."

How can I add a path to open_basedir globally, for all virtual hosts?
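There is no merge mechanism here: a per-vhost php_admin_value replaces the global value entirely rather than appending to it. So the shared path either has to appear in every vhost's own line, or be set once globally for vhosts that don't override it; a sketch (paths are illustrative):

```apache
# In the global Apache PHP configuration - applies wherever no vhost overrides it:
php_admin_value open_basedir "/var/www:/usr/share/pear:/tmp"

# In a vhost that does override, the shared paths must be repeated:
php_admin_value open_basedir "/var/www/user/data:.:/usr/share/pear:/tmp"
```

If the vhost configs are generated from a template, adding the shared suffix in the template keeps the repetition manageable.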

Problem getting puppet to sync custom fact

Posted: 27 Jun 2021 05:01 PM PDT

I am having trouble getting puppet to sync a custom fact. I am using puppet version 0.25.4. The fact is inside a module as described in http://docs.reductivelabs.com/guides/plugins_in_modules.html

If I specify --pluginsync on the command line it syncs correctly, but otherwise it does not, even though I have pluginsync=true in my puppet.conf.

Is it correct that this command line option and the option in the puppet.conf should have the same behavior?
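They should behave the same, but only if the setting sits in a section the agent actually reads. On 0.25.x a common gotcha is putting it under a section heading the client ignores; a minimal sketch of the client-side file:

```ini
# /etc/puppet/puppet.conf on the *client*
[main]
pluginsync = true
```

If the line lives only in a master-only section (or only in the master's puppet.conf), the command-line flag works while the file setting silently does nothing, which matches the behaviour described above.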
