Monday, April 4, 2022

Recent Questions - Server Fault


Windows Events register ansible actions

Posted: 04 Apr 2022 04:05 AM PDT

Ansible: 2.9, Windows: Server 2016 (W2k16)

Hi!

I'm looking for a way to record Ansible actions on the target Windows host.

For example, when I run an `echo test` command via the win_command module, I can't find any entries referring to this action in the Windows Event Log.

Thanks.
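For context, assuming win_command leaves no event by default, one workaround is to log explicitly from the playbook with PowerShell's standard New-EventLog/Write-EventLog cmdlets. A minimal sketch (the "Ansible" event source and event ID 1000 are arbitrary choices, and registering the source may require elevation):

```yaml
- name: Run the command
  win_command: echo test

- name: Record the action in the Windows Application event log
  win_shell: |
    if (-not [System.Diagnostics.EventLog]::SourceExists("Ansible")) {
        New-EventLog -LogName Application -Source "Ansible"
    }
    Write-EventLog -LogName Application -Source "Ansible" `
        -EntryType Information -EventId 1000 -Message "Ansible ran: echo test"
```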

Syncing docker-compose remote context state across multiple developers

Posted: 04 Apr 2022 03:52 AM PDT

Got a question about Docker contexts, fellas.

  • Assume we create a docker context for a remote machine (i.e., `docker context create aaa --docker "host=ssh://root@1.1.1.1"`)
  • And build and run the project on the remote machine (i.e., `docker-compose --context aaa up -d`)
  • Now the machine is running. Say I leave my computer, and my colleague, on their computer, wants to check some logs for the machine (i.e., run `docker-compose --context aaa logs`)

How can we sync the state of contexts? Should I prepare a script that runs the initial `docker context create aaa` command for them, after which docker-compose can recognize the running machine?

Cheers
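For what it's worth, the Docker CLI has stock `docker context export` / `docker context import` subcommands that can move a context between machines; a sketch (the file name is arbitrary, and whether docker-compose then picks the context up is exactly what the question asks):

```shell
# On my machine: serialize the context to a file and share it
docker context export aaa aaa.dockercontext

# On my colleague's machine: import it under the same name...
docker context import aaa aaa.dockercontext

# ...or simply recreate it, since they also have SSH access to the host
docker context create aaa --docker "host=ssh://root@1.1.1.1"
```

Either way the context is purely client-side state; nothing on the remote engine needs to change.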

ALB instead of Cloudfront in front of single server web app

Posted: 04 Apr 2022 03:26 AM PDT

AWS recommends adding CloudFront in front of single-server applications for security and performance; see https://aws.amazon.com/blogs/networking-and-content-delivery/dynamic-whole-site-delivery-with-amazon-cloudfront/

I want to add it to an existing web app, but CloudFront's maximum timeout is 180 seconds, and I have some calls that run longer and would time out. I'm mainly interested in the security upside and less in the performance upside of this setup, since it is for a web panel that does not require high-speed delivery.

My question is: is it a good solution to add an Application Load Balancer instead of dynamic CloudFront in front of the server, in order to get the security benefits, such as hiding the origin server's IP address, WAF, and DDoS mitigation, without the 180-second timeout issue?

Are there downsides to doing this?

Thanks

PostFix issue - trying to get MailHog to work

Posted: 04 Apr 2022 03:29 AM PDT

So, mysteriously, my previously working MailHog stopped working this morning on my Mac (Mojave).

I have followed the steps in this post to get MailHog running: https://www.joshstauffer.com/send-test-emails-on-a-mac-with-mailhog/

Postfix log output (captured with `log stream --predicate '(process == "smtpd") || (process == "smtp")' --info`):

My /etc/hosts file contains an entry for: 127.0.0.1 localhost

Also, when I do the following, I get this (not sure if it's related):

```
[12:26:22][~]#nslookup localhost
Server:     192.168.0.1
Address:    192.168.0.1#53

** server can't find localhost: NXDOMAIN
```

The last section (everything before it unchanged) of my /etc/postfix/main.cf config file:

```
#inet_protocols = all
inet_protocols = ipv4
message_size_limit = 10485760
mailbox_size_limit = 0
biff = no
mynetworks = 127.0.0.0/8, [::1]/128
smtpd_client_restrictions = permit_mynetworks permit_sasl_authenticated permit
recipient_delimiter = +
tls_random_source = dev:/dev/urandom
smtpd_tls_ciphers = medium
inet_interfaces = loopback-only

# Adding this doesnt work:
#mydestination = localhost

# For MailHog
myhostname = localhost
relayhost = [localhost]:1025
compatibility_level = 2
```

I tried adding `inet_protocols = ipv4` according to this post.

Any help much appreciated!

How to create a OneMesh network from two TP-Link routers?

Posted: 04 Apr 2022 02:46 AM PDT

I have a TP-Link Archer MR600 and an Archer C64 router. I want to create a OneMesh network between the two, as the MR600 alone cannot cover the whole house. I have an in-wall Ethernet cable between the two ends of the house.

My idea is to connect the MR600 to the ISP modem, set up Wi-Fi there, and have that Wi-Fi used all around the house. Finally, I want to connect the C64's WAN port to the MR600 to create a OneMesh.

Doing the above, the C64 receives an IP address from the MR600, but no OneMesh devices are recognised. Should I connect both routers to the ISP modem? Is there anything I need to configure in the C64's admin interface?

ansible - get list of all hostnames and corresponding ansible_host values from inventory

Posted: 04 Apr 2022 03:12 AM PDT

My inventory looks like this:

```
db0 ansible_host=10.0.0.1
db1 ansible_host=10.0.0.2
app0 ansible_host=10.0.0.3
app1 ansible_host=10.0.0.4
...
```

From this, I need to extract a list like this:

```yaml
- name: db0
  ip: 10.0.0.1
- name: db1
  ip: 10.0.0.2
- name: app0
  ip: 10.0.0.3
- name: app1
  ip: 10.0.0.4
```

I know I can get all hosts using `groups['all']`.

I can also get the ansible_host value for each host using `hostvars['<hostname>']['ansible_host']`.

How do I combine this to create the list I need?
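One common way to combine the two is a set_fact loop over the inventory; a sketch (the `host_list` variable name is mine, and it assumes every host defines ansible_host):

```yaml
- name: Build a list of name/ip entries from the inventory
  set_fact:
    host_list: "{{ host_list | default([]) + [{'name': item, 'ip': hostvars[item]['ansible_host']}] }}"
  loop: "{{ groups['all'] }}"
```

After the loop, `host_list` holds one `{name, ip}` mapping per inventory host, in the shape shown above.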

Trying to install openSUSE Leap 15.3 alongside existing Windows 10 on my Surface Pro [migrated]

Posted: 04 Apr 2022 01:17 AM PDT

[screenshot: Device Spec]

[screenshot: Windows Spec]

[screenshot: Disk partitions in Windows]

Now, when I restart and boot from an openSUSE Leap 15.3 bootable USB, I expect to see 100 GB of disk space that I can partition as required (root, home, swap, etc.). However, I see two disks of 470 GB each, which I don't understand, and I'm worried that continuing with the openSUSE recommendation will delete 400 GB of my data.

Here is the openSUSE recommendation for disk management, where I expected it to mention the 100 GB of unallocated disk space. Instead it proposes deleting a 470 GB partition which, based on the disk layout I see in Windows, I don't know where it came from.

And if I try to modify the partitions using the openSUSE expert partitioner, this is what I see.

I would like to keep Windows 10, install openSUSE on the 100 GB of unallocated space, and be able to dual-boot. I have absolutely no experience installing operating systems or partitioning disks. Please advise.

Further research has led me to believe that Windows is using something called a Storage Pool and that I in fact have two physical hard drives, like so. Now what do I do?

FBExport - use on Ubuntu

Posted: 04 Apr 2022 12:58 AM PDT

TL;DR: How do I use FBExport on Ubuntu / how do I export a Firebird query result to a CSV file?

I would like to export a query result from a Firebird database to a CSV file. On Windows I do a similar job using FBExport.

Unfortunately, I don't know how to use this tool on Ubuntu.

I downloaded the package from http://www.firebirdfaq.org/fbexport.php

When I try to run ./fbexport I get this error:

```
./fbexport: error while loading shared libraries: libfbclient.so.2: cannot open shared object file: No such file or directory
```
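As an aside, that loader error usually just means the Firebird client library isn't installed. A sketch of the usual fix (the package name `libfbclient2` is the typical Ubuntu name, but it may vary by release):

```shell
# Install the Firebird client library providing libfbclient.so.2
# (package name assumed: libfbclient2)
sudo apt-get install libfbclient2

# Then confirm the dynamic loader can now resolve every dependency
ldd ./fbexport
```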

I also tried compiling the package.

First I changed the makefile from:

```makefile
###############################################################################
.SUFFIXES: .o .cpp

OBJECTS_FBE=fbexport/ParseArgs.o fbexport/FBExport.o fbexport/cli-main.o
OBJECTS_FBC=fbcopy/args.o fbcopy/fbcopy.o fbcopy/TableDependency.o fbcopy/main.o

# Compiler & linker flags
COMPILE_FLAGS=-O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp
LINK_FLAGS=-pthread -lfbclient

#COMPILE_FLAGS=-O1 -DIBPP_WINDOWS -DIBPP_GCC -Iibpp
#LINK_FLAGS=

all:    exe/fbcopy exe/fbexport

exe/fbexport: $(OBJECTS_FBE) ibpp/all_in_one.o
        g++ $(LINK_FLAGS) ibpp/all_in_one.o $(OBJECTS_FBE) -oexe/fbexport

exe/fbcopy: $(OBJECTS_FBC) ibpp/all_in_one.o
        g++ $(LINK_FLAGS) ibpp/all_in_one.o $(OBJECTS_FBC) -oexe/fbcopy

# Linux only
#       FB2.0: g++ -pthread -lfbclient $(OBJECTS) -o$(EXENAME)
#       FB1.5: g++ -lfbclient $(OBJECTS) -o$(EXENAME)
#       FB1.0: g++ -lgds -lcrypt -lm $(OBJECTS) -o$(EXENAME)

install:
        install exe/fbcopy /usr/bin/fbcopy
        install exe/fbexport /usr/bin/fbexport

.cpp.o:
        g++ -c $(COMPILE_FLAGS) -o $@ $<

clean:
        rm -f fbcopy/*.o
        rm -f ibpp/all_in_one.o
        rm -f exe/fbcopy*
        rm -f fbexport/*.o
        rm -f exe/fbexport*

#EOF
```

to:

```makefile
###############################################################################
.SUFFIXES: .o .cpp

OBJECTS_FBE=fbexport/ParseArgs.o fbexport/FBExport.o fbexport/cli-main.o

# Compiler & linker flags
COMPILE_FLAGS=-O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp
LINK_FLAGS=-pthread -lfbclient

#COMPILE_FLAGS=-O1 -DIBPP_WINDOWS -DIBPP_GCC -Iibpp
#LINK_FLAGS=

all:    exe/fbexport

exe/fbexport: $(OBJECTS_FBE) ibpp/all_in_one.o
        g++ $(LINK_FLAGS) ibpp/all_in_one.o $(OBJECTS_FBE) -oexe/fbexport

# Linux only
#       FB2.0: g++ -pthread -lfbclient $(OBJECTS) -o$(EXENAME)
#       FB1.5: g++ -lfbclient $(OBJECTS) -o$(EXENAME)
#       FB1.0: g++ -lgds -lcrypt -lm $(OBJECTS) -o$(EXENAME)

install:
        install exe/fbexport /usr/bin/fbexport

.cpp.o:
        g++ -c $(COMPILE_FLAGS) -o $@ $<

clean:
        rm -f ibpp/all_in_one.o
        rm -f fbexport/*.o
        rm -f exe/fbexport*

#EOF
```

(because I want to compile only FBExport, excluding FBCopy)

After this change I ran make in the main folder.

Output:

```
user@apiserver:~/fbexport-1.90$ make
g++ -c -O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp -o fbexport/ParseArgs.o fbexport/ParseArgs.cpp
g++ -c -O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp -o fbexport/FBExport.o fbexport/FBExport.cpp
fbexport/FBExport.cpp: In member function 'std::string FBExport::CreateHumanString(IBPP::Statement&, int)':
fbexport/FBExport.cpp:318:29: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'int' [-Wformat=]
  318 |             sprintf(str, "%ld", x);
fbexport/FBExport.cpp:40:21: warning: format '%lli' expects argument of type 'long long int', but argument 3 has type 'int64_t' {aka 'long int'} [-Wformat=]
   40 | #define INT64FORMAT "%lli"
fbexport/FBExport.cpp:351:26: note: in expansion of macro 'INT64FORMAT'
  351 |             sprintf(str, INT64FORMAT, int64val);
fbexport/FBExport.cpp:40:25: note: format string is defined here
fbexport/FBExport.cpp: In member function 'bool FBExport::CreateString(IBPP::Statement&, int, std::string&)':
fbexport/FBExport.cpp:429:29: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'int' [-Wformat=]
  429 |             sprintf(str, "%ld", x);
fbexport/FBExport.cpp:435:29: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'int' [-Wformat=]
  435 |             sprintf(str, "%ld", d.GetDate());
fbexport/FBExport.cpp:440:29: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'int' [-Wformat=]
  440 |             sprintf(str, "%ld", t.GetTime());
fbexport/FBExport.cpp:40:21: warning: format '%lli' expects argument of type 'long long int', but argument 3 has type 'int64_t' {aka 'long int'} [-Wformat=]
   40 | #define INT64FORMAT "%lli"
fbexport/FBExport.cpp:462:26: note: in expansion of macro 'INT64FORMAT'
  462 |             sprintf(str, INT64FORMAT, int64val);
fbexport/FBExport.cpp: In member function 'int FBExport::Export(IBPP::Statement&, FILE*)':
fbexport/FBExport.cpp:487:18: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  487 |     register int fc = st->Columns();
fbexport/FBExport.cpp:491:23: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  491 |     for (register int i=1; i<=fc; i++)
fbexport/FBExport.cpp:505:27: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  505 |         for (register int i=1; i<=fc; i++)   // ... export all fields to file.
fbexport/FBExport.cpp: In member function 'int FBExport::ExportHuman(IBPP::Statement&, FILE*)':
fbexport/FBExport.cpp:829:18: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  829 |     register int fc = st->Columns();
fbexport/FBExport.cpp:835:27: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  835 |         for (register int i=1; i<=fc; i++)   // output CSV header.
fbexport/FBExport.cpp:847:27: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  847 |         for (register int i=1; i<=fc; i++)   // ... export all fields to file.
fbexport/FBExport.cpp:860:27: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  860 |         for (register int i=1; i<=fc; i++)   // output CSV header.
fbexport/FBExport.cpp:875:27: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
  875 |         for (register int i=1; i<=fc; i++)   // ... export all fields to file.
fbexport/FBExport.cpp: In function 'int statement_length(FILE*)':
fbexport/FBExport.cpp:1335:24: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
 1335 |     register    int    c = 0, tmp = 0;
fbexport/FBExport.cpp:1335:31: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
fbexport/FBExport.cpp:1336:24: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
 1336 |     register    int    l = 0;
fbexport/FBExport.cpp: In function 'char* read_statement(char*, int, FILE*)':
fbexport/FBExport.cpp:1376:24: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
 1376 |     register    int    c = 0, tmp = 0;
fbexport/FBExport.cpp:1376:31: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
fbexport/FBExport.cpp:1377:25: warning: ISO C++17 does not allow 'register' storage class specifier [-Wregister]
 1377 |     register    char   *P;
fbexport/FBExport.cpp: In member function 'std::string FBExport::CreateHumanString(IBPP::Statement&, int)':
fbexport/FBExport.cpp:339:17: warning: ignoring return value of 'char* gcvt(double, int, char*)' declared with attribute 'warn_unused_result' [-Wunused-result]
  339 |             gcvt(fval, 19, str);
fbexport/FBExport.cpp:345:17: warning: ignoring return value of 'char* gcvt(double, int, char*)' declared with attribute 'warn_unused_result' [-Wunused-result]
  345 |             gcvt(dval, 19, str);
fbexport/FBExport.cpp: In member function 'bool FBExport::CreateString(IBPP::Statement&, int, std::string&)':
fbexport/FBExport.cpp:452:17: warning: ignoring return value of 'char* gcvt(double, int, char*)' declared with attribute 'warn_unused_result' [-Wunused-result]
  452 |             gcvt(fval, 19, str);
fbexport/FBExport.cpp:457:17: warning: ignoring return value of 'char* gcvt(double, int, char*)' declared with attribute 'warn_unused_result' [-Wunused-result]
  457 |             gcvt(dval, 19, str);
fbexport/FBExport.cpp: In member function 'int FBExport::Import(IBPP::Statement&, FILE*)':
fbexport/FBExport.cpp:706:26: warning: ignoring return value of 'size_t fread(void*, size_t, size_t, FILE*)' declared with attribute 'warn_unused_result' [-Wunused-result]
  706 |                     fread(buff, size, 1, fp);
fbexport/FBExport.cpp: In member function 'int FBExport::Init(Arguments*)':
fbexport/FBExport.cpp:1211:41: warning: '__builtin___sprintf_chk' may write a terminating nul past the end of the destination [-Wformat-overflow=]
 1211 |                         sprintf(num, "%d", i+1);
In file included from /usr/include/stdio.h:888,
                 from /usr/include/c++/11/cstdio:42,
                 from /usr/include/c++/11/ext/string_conversions.h:43,
                 from /usr/include/c++/11/bits/basic_string.h:6606,
                 from /usr/include/c++/11/string:55,
                 from ibpp/ibpp.h:91,
                 from fbexport/FBExport.cpp:44:
/usr/include/x86_64-linux-gnu/bits/stdio2.h:38:34: note: '__builtin___sprintf_chk' output between 2 and 11 bytes into a destination of size 10
g++ -c -O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp -o fbexport/cli-main.o fbexport/cli-main.cpp
g++ -c -O1 -DIBPP_LINUX -DIBPP_GCC -Iibpp -o ibpp/all_in_one.o ibpp/all_in_one.cpp
g++ -pthread -lfbclient ibpp/all_in_one.o fbexport/ParseArgs.o fbexport/FBExport.o fbexport/cli-main.o -oexe/fbexport
```

What can I do in this case?

Regards, Tomasz

Nginx match any word in if statement except one

Posted: 04 Apr 2022 04:13 AM PDT

I'm using the following if-statement regex in my nginx config to match and block some bad requests by request URI. These bad queries always request only one argument, but each time with a random name (and a random alphanumeric length). They also always hit the homepage.

Example of bad query: /?some1bad0query2=nminkvywbjfdysnvhp

```nginx
if ($request_uri ~ "[\w]{5,25}\=[\w]{5,25}$") {
    return 403;
}
```

How can I modify this regex to exclude matching certain argument names like key or query (i.e. /?query=somestring)?

I tried using round brackets and ?(!query), but no luck.

Please help me correct this regex statement. Thanks in advance.
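For reference, nginx regexes are PCRE, where this kind of exclusion is a negative lookahead written `(?!...)` rather than `?(!...)`. A sketch of the pattern logic, demonstrated in Python's compatible `re` syntax (the whitelisted names `query`/`key` come from the question; anchoring on `/?` is my assumption):

```python
import re

# Block one-argument homepage queries with random names,
# but let whitelisted names like ?query= or ?key= through.
pattern = re.compile(r"^/\?(?!(?:query|key)=)\w{5,25}=\w{5,25}$")

assert pattern.match("/?some1bad0query2=nminkvywbjfdysnvhp")  # random name: blocked
assert not pattern.match("/?query=somestring")                # whitelisted: allowed
```

The same pattern string should drop into the nginx `if ($request_uri ~ "...")` test, since both engines accept this lookahead syntax.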

Host with bridge cannot ping other subnets

Posted: 04 Apr 2022 12:02 AM PDT

I'm having some trouble with my server. I set up a bridge interface to use with my virtual machines, and everything works fine except when I try to ping devices on a different subnet: I get a "host is unreachable" error. The most bizarre thing is that pinging other subnets from the virtual machines works. I'm using Ubuntu 20.04. Any help is appreciated.
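As a starting point, a few routing checks usually narrow this kind of problem down; the interface name `br0` and the address below are placeholders for this setup:

```shell
# Does the bridge itself carry the host's IP address, or is it still on the old NIC?
ip addr show br0

# Is there a default route (or a route to the other subnet) via the bridge?
ip route

# Which interface and gateway would a ping to the other subnet actually use?
ip route get 192.168.2.1
```

"Host is unreachable" from the host while the VMs work often means the host's own IP or default route stayed on the physical interface instead of moving to the bridge.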

Postfix Recipient address rejected: Access denied error after removing permit_mynetworks

Posted: 03 Apr 2022 10:15 PM PDT

I have a mail server from which I had to remove "permit_mynetworks" in the Postfix configuration file (main.cf) because of abuse, leaving only SASL-authenticated relaying allowed. But now Postfix rejects any foreign recipients. Can somebody please tell me what's wrong? Thanks in advance!
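For context, a relay restriction ending in a bare `reject` refuses all mail from unauthenticated clients, including inbound mail addressed to the server's own domains. The stock Postfix restriction for this trade-off is `reject_unauth_destination`, which still blocks open relay but accepts mail for domains the server is responsible for. A sketch of that shape (not asserted to be the poster's exact fix):

```
smtpd_relay_restrictions =
    permit_sasl_authenticated,
    reject_unauth_destination
```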

Here are the configurations: [main.cf]

```
# --------------------
# INSTALL-TIME CONFIGURATION INFORMATION
#
# location of the Postfix queue. Default is /var/spool/postfix.
queue_directory = /var/spool/postfix

# location of all postXXX commands. Default is /usr/sbin.
command_directory = /usr/sbin

# location of all Postfix daemon programs (i.e. programs listed in the
# master.cf file). This directory must be owned by root.
# Default is /usr/libexec/postfix
daemon_directory = /usr/lib/postfix/sbin

# location of Postfix-writable data files (caches, random numbers).
# This directory must be owned by the mail_owner account (see below).
# Default is /var/lib/postfix.
data_directory = /var/lib/postfix

# owner of the Postfix queue and of most Postfix daemon processes.
# Specify the name of a user account THAT DOES NOT SHARE ITS USER OR GROUP ID
# WITH OTHER ACCOUNTS AND THAT OWNS NO OTHER FILES OR PROCESSES ON THE SYSTEM.
# In particular, don't specify nobody or daemon. PLEASE USE A DEDICATED USER.
# Default is postfix.
mail_owner = postfix

# The following parameters are used when installing a new Postfix version.
#
# sendmail_path: The full pathname of the Postfix sendmail command.
# This is the Sendmail-compatible mail posting interface.
#
sendmail_path = /usr/sbin/sendmail

# newaliases_path: The full pathname of the Postfix newaliases command.
# This is the Sendmail-compatible command to build alias databases.
#
newaliases_path = /usr/bin/newaliases

# full pathname of the Postfix mailq command.  This is the Sendmail-compatible
# mail queue listing command.
mailq_path = /usr/bin/mailq

# group for mail submission and queue management commands.
# This must be a group name with a numerical group ID that is not shared with
# other accounts, not even with the Postfix account.
setgid_group = postdrop

# external command that is executed when a Postfix daemon program is run with
# the -D option.
#
# Use "command .. & sleep 5" so that the debugger can attach before
# the process marches on. If you use an X-based debugger, be sure to
# set up your XAUTHORITY environment variable before starting Postfix.
#
debugger_command =
    PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
    ddd $daemon_directory/$process_name $process_id & sleep 5

debug_peer_level = 2

# --------------------
# CUSTOM SETTINGS
#

# SMTP server response code when recipient or domain not found.
unknown_local_recipient_reject_code = 550

# Do not notify local user.
biff = no

# Disable the rewriting of "site!user" into "user@site".
swap_bangpath = no

# Disable the rewriting of the form "user%domain" to "user@domain".
allow_percent_hack = no

# Allow recipient address start with '-'.
allow_min_user = no

# Disable the SMTP VRFY command. This stops some techniques used to
# harvest email addresses.
disable_vrfy_command = yes

# Enable both IPv4 and/or IPv6: ipv4, ipv6, all.
inet_protocols = all

# Enable all network interfaces.
inet_interfaces = all

#
# TLS settings.
#
# SSL key, certificate, CA
#
smtpd_tls_key_file = /etc/ssl/private/iRedMail.key
smtpd_tls_cert_file = /etc/ssl/certs/iRedMail.crt
smtpd_tls_CAfile = /etc/ssl/certs/iRedMail.crt
smtpd_tls_CApath = /etc/ssl/certs

#
# Disable SSLv2, SSLv3
#
smtpd_tls_protocols = !SSLv2 !SSLv3
smtpd_tls_mandatory_protocols = !SSLv2 !SSLv3

smtp_tls_protocols = !SSLv2 !SSLv3
smtp_tls_mandatory_protocols = !SSLv2 !SSLv3

lmtp_tls_protocols = !SSLv2 !SSLv3
lmtp_tls_mandatory_protocols = !SSLv2 !SSLv3

#
# Fix 'The Logjam Attack'.
#
smtpd_tls_exclude_ciphers = aNULL, eNULL, EXPORT, DES, RC4, MD5, PSK, aECDH, EDH-DSS-DES-CBC3-SHA, EDH-RSA-DES-CDC3-SHA, KRB5-DE5, CBC3-SHA
smtpd_tls_dh512_param_file = /etc/ssl/dh512_param.pem
smtpd_tls_dh1024_param_file = /etc/ssl/dh2048_param.pem

tls_random_source = dev:/dev/urandom

# Log only a summary message on TLS handshake completion, no logging of client
# certificate trust-chain verification errors if client certificate
# verification is not required. With Postfix 2.8 and earlier, log the summary
# message, peer certificate summary information and unconditionally log
# trust-chain verification errors.
smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1

# Opportunistic TLS: announce STARTTLS support to remote SMTP clients, but do
# not require that clients use TLS encryption.
smtpd_tls_security_level = may

# Produce `Received:` message headers that include information about the
# protocol and cipher used, as well as the remote SMTP client CommonName and
# client certificate issuer CommonName.
# This is disabled by default, as the information may be modified in transit
# through other mail servers. Only information that was recorded by the final
# destination can be trusted.
#smtpd_tls_received_header = yes

# Opportunistic TLS, used when Postfix sends email to remote SMTP server.
# Use TLS if this is supported by the remote SMTP server, otherwise use
# plaintext.
# References:
#   - http://www.postfix.org/TLS_README.html#client_tls_may
#   - http://www.postfix.org/postconf.5.html#smtp_tls_security_level
smtp_tls_security_level = may

# Use the same CA file as smtpd.
smtp_tls_CApath = /etc/ssl/certs
smtp_tls_CAfile = $smtpd_tls_CAfile
smtp_tls_note_starttls_offer = yes

# Enable long, non-repeating, queue IDs (queue file names).
# The benefit of non-repeating names is simpler logfile analysis and easier
# queue migration (there is no need to run "postsuper" to change queue file
# names that don't match their message file inode number).
enable_long_queue_ids = yes

# Reject unlisted sender and recipient
smtpd_reject_unlisted_recipient = no
smtpd_reject_unlisted_sender = no

# Header and body checks with PCRE table
header_checks = pcre:/etc/postfix/header_checks
body_checks = pcre:/etc/postfix/body_checks.pcre

# A mechanism to transform commands from remote SMTP clients.
# This is a last-resort tool to work around client commands that break
# interoperability with the Postfix SMTP server. Other uses involve fault
# injection to test Postfix's handling of invalid commands.
# Requires Postfix-2.7+.
smtpd_command_filter = pcre:/etc/postfix/command_filter.pcre

# Relay restriction
smtpd_relay_restrictions =
        permit_sasl_authenticated,
        reject

# HELO restriction
smtpd_helo_required = yes
smtpd_helo_restrictions =
    permit_sasl_authenticated
    check_helo_access pcre:/etc/postfix/helo_access.pcre
    reject_non_fqdn_helo_hostname
    reject_unknown_helo_hostname

# Sender restrictions
smtpd_sender_restrictions =
    permit_sasl_authenticated
    permit_mynetworks
    check_sender_access pcre:/etc/postfix/sender_access.pcre
    reject

# Recipient restrictions
smtpd_recipient_restrictions =
    check_policy_service inet:127.0.0.1:7777
    permit_sasl_authenticated
    permit_mynetworks
    check_policy_service inet:127.0.0.1:12340
    reject_unauth_destination

# END-OF-MESSAGE restrictions
smtpd_end_of_data_restrictions =
    check_policy_service inet:127.0.0.1:7777

# Data restrictions
smtpd_data_restrictions = reject_unauth_pipelining

# SRS (Sender Rewriting Scheme) support
#sender_canonical_maps = tcp:127.0.0.1:7778
#sender_canonical_classes = envelope_sender
#recipient_canonical_maps = tcp:127.0.0.1:7779
#recipient_canonical_classes= envelope_recipient,header_recipient

proxy_read_maps = $canonical_maps $lmtp_generic_maps $local_recipient_maps $mydestination $mynetworks $recipient_bcc_maps $recipient_canonical_maps $relay_domains $relay_recipient_maps $relocated_maps $sender_bcc_maps $sender_canonical_maps $smtp_generic_maps $smtpd_sender_login_maps $transport_maps $virtual_alias_domains $virtual_alias_maps $virtual_mailbox_domains $virtual_mailbox_maps $smtpd_sender_restrictions $sender_dependent_relayhost_maps

# Avoid duplicate recipient messages. Default is 'yes'.
enable_original_recipient = no

# Virtual support.
virtual_minimum_uid = 2000
virtual_uid_maps = static:2000
virtual_gid_maps = static:2000
virtual_mailbox_base = /var/vmail

# Do not set virtual_alias_domains.
virtual_alias_domains =

#
# Enable SASL authentication on port 25 and force TLS-encrypted SASL authentication.
# WARNING: NOT RECOMMENDED to enable smtp auth on port 25, all end users should
#          be forced to submit email through port 587 instead.
#
smtpd_sasl_auth_enable = yes
smtpd_delay_reject = yes
smtpd_sasl_security_options = noanonymous
smtpd_tls_auth_only = no
smtpd_client_restrictions = permit_sasl_authenticated
broken_sasl_auth_clients = yes

# hostname
myhostname = mail.ads-network.top
myorigin = mail.ads-network.top
mydomain = mail.ads-network.top

# trusted SMTP clients which are allowed to relay mail through Postfix.
#
# Note: additional IP addresses/networks listed in mynetworks should be listed
#       in iRedAPD setting 'MYNETWORKS' (in `/opt/iredapd/settings.py`) too.
#       for example:
#
#       MYNETWORKS = ['xx.xx.xx.xx', 'xx.xx.xx.0/24', ...]
#
mynetworks = 127.0.0.1 [::1]

# Accepted local emails
mydestination = $myhostname, localhost, localhost.localdomain

alias_maps = hash:/etc/postfix/aliases
alias_database = hash:/etc/postfix/aliases

# Default message_size_limit.
message_size_limit = 15728640

# The set of characters that can separate a user name from its extension
# (example: user+foo), or a .forward file name from its extension (example:
# .forward+foo).
# Postfix 2.11 and later supports multiple characters.
recipient_delimiter = +

# The time after which the sender receives a copy of the message headers of
# mail that is still queued. Default setting is disabled (0h) by Postfix.
#delay_warning_time = 1h

# Do not display the name of the recipient table in the "User unknown" responses.
# The extra detail makes trouble shooting easier but also reveals information
# that is nobody elses business.
show_user_unknown_table_name = no
compatibility_level = 2

#
# Lookup virtual mail accounts
#
transport_maps =
    proxy:mysql:/etc/postfix/mysql/transport_maps_user.cf
    proxy:mysql:/etc/postfix/mysql/transport_maps_maillist.cf
    proxy:mysql:/etc/postfix/mysql/transport_maps_domain.cf

sender_dependent_relayhost_maps =
    proxy:mysql:/etc/postfix/mysql/sender_dependent_relayhost_maps.cf

# Lookup table with the SASL login names that own the sender (MAIL FROM) addresses.
smtpd_sender_login_maps =
    proxy:mysql:/etc/postfix/mysql/sender_login_maps.cf

virtual_mailbox_domains =
    proxy:mysql:/etc/postfix/mysql/virtual_mailbox_domains.cf

relay_domains =
    $mydestination
    proxy:mysql:/etc/postfix/mysql/relay_domains.cf

virtual_mailbox_maps =
    proxy:mysql:/etc/postfix/mysql/virtual_mailbox_maps.cf

virtual_alias_maps =
    proxy:mysql:/etc/postfix/mysql/virtual_alias_maps.cf
    proxy:mysql:/etc/postfix/mysql/domain_alias_maps.cf
    proxy:mysql:/etc/postfix/mysql/catchall_maps.cf
    proxy:mysql:/etc/postfix/mysql/domain_alias_catchall_maps.cf

sender_bcc_maps =
    proxy:mysql:/etc/postfix/mysql/sender_bcc_maps_user.cf
    proxy:mysql:/etc/postfix/mysql/sender_bcc_maps_domain.cf

recipient_bcc_maps =
    proxy:mysql:/etc/postfix/mysql/recipient_bcc_maps_user.cf
    proxy:mysql:/etc/postfix/mysql/recipient_bcc_maps_domain.cf

#
# Postscreen
#
postscreen_greet_action = drop
postscreen_blacklist_action = drop
postscreen_dnsbl_action = drop
postscreen_dnsbl_threshold = 2

# Attention:
#   - zen.spamhaus.org free tire has 3 limits
#     (https://www.spamhaus.org/organization/dnsblusage/):
#
#     1) Your use of the Spamhaus DNSBLs is non-commercial*, and
#     2) Your email traffic is less than 100,000 SMTP connections per day, and
#     3) Your DNSBL query volume is less than 300,000 queries per day.
#
#   - FAQ: "Your DNSBL blocks nothing at all!"
#     https://www.spamhaus.org/faq/section/DNSBL%20Usage#261
#
# It's strongly recommended to use a local DNS server for cache.
postscreen_dnsbl_sites =
    zen.spamhaus.org=127.0.0.[2..11]*3
    b.barracudacentral.org=127.0.0.2*2

postscreen_dnsbl_reply_map = texthash:/etc/postfix/postscreen_dnsbl_reply
postscreen_access_list = permit_mynetworks cidr:/etc/postfix/postscreen_access.cidr

# Require Postfix-2.11+
postscreen_dnsbl_whitelist_threshold = -2

#
# Dovecot SASL support.
#
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/dovecot-auth
virtual_transport = dovecot
dovecot_destination_recipient_limit = 1

#
# mlmmj - mailing list manager
#
mlmmj_destination_recipient_limit = 1

#
# Amavisd + SpamAssassin + ClamAV
#
content_filter = smtp-amavis:[127.0.0.1]:10024

# Concurrency per recipient limit.
smtp-amavis_destination_recipient_limit = 1000
relayhost =
```

[master.cf]

#
# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================
#smtp      inet  n       -       y       -       1       postscreen
#smtpd     pass  -       -       y       -       -       smtpd
smtp      inet  n       -       -       -       -       smtpd
dnsblog   unix  -       -       y       -       0       dnsblog
tlsproxy  unix  -       -       y       -       0       tlsproxy
#submission inet n       -       y       -       -       smtpd
#  -o syslog_name=postfix/submission
#  -o smtpd_tls_security_level=encrypt
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_tls_auth_only=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
#  -o smtpd_recipient_restrictions=
#  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#smtps     inet  n       -       y       -       -       smtpd
#  -o syslog_name=postfix/smtps
#  -o smtpd_tls_wrappermode=yes
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
#  -o smtpd_recipient_restrictions=
#  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
#628       inet  n       -       y       -       -       qmqpd
#smtp       inet  n       -       -       -       -       smtpd
pickup     unix  n       -       n       60      1       pickup
    -o content_filter=smtp-amavis:[127.0.0.1]:10026
cleanup    unix  n       -       n       -       0       cleanup
#qmgr     unix  n       -       n       300     1       oqmgr
qmgr       unix  n       -       n       300     1       qmgr
tlsmgr     unix  -       -       n       1000?   1       tlsmgr
rewrite    unix  -       -       n       -       -       trivial-rewrite
bounce     unix  -       -       n       -       0       bounce
defer      unix  -       -       n       -       0       bounce
trace      unix  -       -       n       -       0       bounce
verify     unix  -       -       n       -       1       verify
flush      unix  n       -       n       1000?   0       flush
proxymap   unix  -       -       n       -       -       proxymap
proxywrite unix  -       -       n       -       1       proxymap
smtp       unix  -       -       n       -       -       smtp
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
relay      unix  -       -       n       -       -       smtp
    -o syslog_name=postfix/$service_name
showq      unix  n       -       n       -       -       showq
error      unix  -       -       n       -       -       error
retry      unix  -       -       n       -       -       error
discard    unix  -       -       n       -       -       discard
local      unix  -       n       n       -       -       local
virtual    unix  -       n       n       -       -       virtual
lmtp       unix  -       -       n       -       -       lmtp
anvil      unix  -       -       n       -       1       anvil
scache     unix  -       -       n       -       1       scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent.  See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
postlog    unix-dgram n  -       n       -       1       postlogd
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#  mailbox_transport = lmtp:inet:localhost
#  virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus     unix  -       n       n       -       -       pipe
#  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix  -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
maildrop   unix  -       n       n       -       -       pipe
    flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# Other external delivery methods.
#
uucp       unix  -       n       n       -       -       pipe
    flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
ifmail     unix  -       n       n       -       -       pipe
    flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp      unix  -       n       n       -       -       pipe
    flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n       n       -       2       pipe
    flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop}
    ${user} ${extension}

mailman    unix  -       n       n       -       -       pipe
    flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop}
    ${user}

# Submission, port 587, force TLS connection.
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o content_filter=smtp-amavis:[127.0.0.1]:10026

# smtps, port 465, force SSL connection.
465 inet  n       -       n       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
  -o content_filter=smtp-amavis:[127.0.0.1]:10026

# Use dovecot's `deliver` program as LDA.
dovecot unix    -       n       n       -       -      pipe
    flags=DRh user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${user}@${domain} -m ${extension}

# mlmmj - mailing list manager
# ${nexthop} is '%d/%u' in transport ('mlmmj:%d/%u')
mlmmj   unix  -       n       n       -       -       pipe
    flags=ORhu user=mlmmj:mlmmj argv=/usr/bin/mlmmj-amime-receive -L /var/vmail/mlmmj/${nexthop}

# Amavisd integration.
smtp-amavis unix -  -   n   -   4  smtp
    -o syslog_name=postfix/amavis
    -o smtp_data_done_timeout=1200
    -o smtp_send_xforward_command=yes
    -o disable_dns_lookups=yes
    -o max_use=20

# smtp port used by Amavisd to re-inject scanned email back to Postfix
127.0.0.1:10025 inet n  -   n   -   -  smtpd
    -o syslog_name=postfix/10025
    -o content_filter=
    -o mynetworks_style=host
    -o mynetworks=127.0.0.0/8
    -o local_recipient_maps=
    -o relay_recipient_maps=
    -o strict_rfc821_envelopes=yes
    -o smtp_tls_security_level=none
    -o smtpd_tls_security_level=none
    -o smtpd_restriction_classes=
    -o smtpd_delay_reject=no
    -o smtpd_client_restrictions=permit_mynetworks,reject
    -o smtpd_helo_restrictions=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
    -o smtpd_end_of_data_restrictions=
    -o smtpd_error_sleep_time=0
    -o smtpd_soft_error_limit=1001
    -o smtpd_hard_error_limit=1000
    -o smtpd_client_connection_count_limit=0
    -o smtpd_client_connection_rate_limit=0
    -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks,no_address_mappings

# smtp port used by mlmmj to re-inject scanned email back to Postfix, with
# address mapping support
127.0.0.1:10028 inet n  -   n   -   -  smtpd
    -o syslog_name=postfix/10028
    -o content_filter=
    -o mynetworks_style=host
    -o mynetworks=127.0.0.0/8
    -o local_recipient_maps=
    -o relay_recipient_maps=
    -o strict_rfc821_envelopes=yes
    -o smtp_tls_security_level=none
    -o smtpd_tls_security_level=none
    -o smtpd_restriction_classes=
    -o smtpd_delay_reject=no
    -o smtpd_client_restrictions=permit_mynetworks,reject
    -o smtpd_helo_restrictions=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
    -o smtpd_end_of_data_restrictions=
    -o smtpd_error_sleep_time=0
    -o smtpd_soft_error_limit=1001
    -o smtpd_hard_error_limit=1000
    -o smtpd_client_connection_count_limit=0
    -o smtpd_client_connection_rate_limit=0
    -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks

Also, here is the DIAG reported by postfix:

Diagnostic-Code: smtp; 554 5.7.1 id=17953-16 - Rejected by next-hop MTA on      relaying, from MTA(smtp:[127.0.0.1]:10025): 554 5.7.1      <******@outlook.com>: Recipient address rejected: Access denied  
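The DSN points at the re-injection listener: in the master.cf above, 127.0.0.1:10025 overrides smtpd_recipient_restrictions with permit_sasl_authenticated,reject, but amavisd hands mail back from localhost without SASL authentication, so every re-injected recipient is refused with exactly this "Access denied". The 10028 mlmmj listener already shows the usual shape for a loopback-only port; a sketch of the same override applied to 10025 (all other -o lines unchanged):

```
127.0.0.1:10025 inet n  -   n   -   -  smtpd
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
```

Since mynetworks is already forced to 127.0.0.0/8 on that listener, permit_mynetworks admits only the local re-injection traffic.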

Could someone please explain the difference between these expressions

Posted: 03 Apr 2022 09:51 PM PDT

drop proto tcp and not (dst port 80)

pass proto tcp and dst port 80

Is there a difference between them?

How should I migrate a zfs pool to Windows Storage Spaces?

Posted: 03 Apr 2022 09:36 PM PDT

I have a single hard drive that I want to migrate to Windows Storage Spaces ReFS file system for better Windows compatibility. Obviously copying all the data to another disk and back is a last resort; so, is there a better way?
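Windows cannot mount a zfs disk natively and there is no in-place conversion between zfs and NTFS/ReFS, so some form of copy is unavoidable. If a spare disk is available, one single-pass route is to keep the zfs disk attached to a Linux/FreeBSD machine, export the dataset over SMB, and mirror it straight onto the new Storage Spaces volume; a sketch with placeholder host, share, and drive-letter names:

```
rem Mirror the SMB-exported zfs dataset onto the new ReFS volume R:
robocopy \\zfshost\tank R:\ /MIR /COPY:DAT /DCOPY:DAT /R:1 /W:1 /LOG:C:\migrate.log
```

/COPY:DAT deliberately skips NTFS ACLs, which do not exist on the zfs side anyway; re-running the same command later only transfers changed files.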

SPF FAIL with IP 0.0.0.0 when sending via script

Posted: 03 Apr 2022 09:44 PM PDT

I have no problem sending email to Gmail from my domain (i.e. support@mycompany.com can send and receive email fine). The recipient receives the email and there is no warning message. This is the case when I send normally, i.e. from Outlook or another mail client.

It wasn't easy to get to this stage; I had to ensure that in my cPanel all the DKIM and SPF records were properly set and validated, or else Gmail would show a big yellow warning message: "Gmail could not verify that it actually came from mycompany.com."

The problem arises when I send from a .NET script. If the recipient is on Gmail, they still receive the email, but Gmail warns them that

Gmail could not verify that it actually came from mycompany.com. Avoid clicking links, downloading attachments, or replying with personal information.

When I click on "Show Original" for the email, then I will get a message saying that

SPF: FAIL with IP 0.0.0.0

How do I ensure that sending from the script also works properly? Bear in mind that I have no problem with normal sending!

Here's what my .NET script looks like; am I missing anything?

var message = new MailMessage(sender, receipient);
message.Headers.Add("Precedence", "bulk");
message.Subject = "From scripts";
message.IsBodyHtml = true;
message.Body = "A test email";
var smtpClient = new SmtpClient(mailserver, 587);
smtpClient.Credentials = new NetworkCredential(senderemail, password);
smtpClient.Send(message);

I've read a few threads such as this one, but they don't seem to apply to my problem, because they don't explain why normal sending passes while script sending fails.
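For context, SPF is evaluated against the connecting server's IP and the domain of the envelope sender (MAIL FROM), so a script that authenticates to the same server on port 587 should normally pass exactly like Outlook does; a failure usually means the script's messages leave via a different outbound path or with a different envelope-from domain. A minimal SPF TXT record covering the domain's own mail server plus one extra sending IP might look like this (the IP is a placeholder, not taken from the question):

```
mycompany.com.  IN TXT  "v=spf1 mx a ip4:203.0.113.25 ~all"
```

Whether this applies here depends on what mailserver in the script actually resolves to; comparing the Received and Return-Path headers of a "good" (Outlook) message and a "bad" (script) message in Show Original will show where the two delivery paths diverge.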

Outgoing http calls fail randomly - no route to host

Posted: 04 Apr 2022 12:43 AM PDT

A while ago a strange problem occurred in our kubernetes cluster. We have a network containing windows servers (webserver, mailserver, etc.) and a kubernetes cluster running Rancher v2.6.0.

The cluster communicates with the Windows servers via HTTP requests and SMTP/IMAP to send and read emails. For a while now, random HTTP requests have been failing with the error message no route to host. The problem seems to be limited to connections within the network and does not affect requests to third-party APIs. The error also does not always occur: a lot of requests go through without any issues and some fail. I implemented a retry policy to try the same request again a few seconds later; sometimes it works on the first retry, sometimes on the second, and sometimes not at all.

I tried to google for a solution but I couldn't come up with anything, especially since only a percentage of all requests are affected.

Our sysadmin maintaining the network and the Windows servers cannot identify any issues or even see the requests, so my guess is that the requests never leave the cluster, if that makes sense.

Unfortunately the kubernetes cluster used to be maintained by a colleague who is not available anymore. I'd be very grateful for suggestions where to start looking for a solution.

Ingress nginx-controller - failed for volume “webhook-cert”

Posted: 04 Apr 2022 02:30 AM PDT

I ran kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml

But it didn't run.

Events:
  Type     Reason       Age                     From               Message
  ----     ------       ----                    ----               -------
  Normal   Scheduled    8m56s                   default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-68649d49b8-g5r58 to ip-10-40-0-32.ap-northeast-2.compute.internal
  Warning  FailedMount  8m56s (x2 over 8m56s)   kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Normal   Killing      7m56s                   kubelet            Container controller failed liveness probe, will be restarted
  Normal   Pulled       7m45s (x2 over 8m54s)   kubelet            Container image "k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" already present on machine
  Normal   Created      7m45s (x2 over 8m54s)   kubelet            Created container controller
  Normal   Started      7m45s (x2 over 8m53s)   kubelet            Started container controller
  Warning  Unhealthy    7m16s (x7 over 8m36s)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy    3m46s (x30 over 8m36s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

logs...

Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"info", "msg":"patching webhook configurations 'ingress-nginx-admission' mutating=false, validating=true, failurePolicy=Fail", "source":"k8s/k8s.go:39", "time":"2021-08-17T18:08:40Z"}
{"err":"the server could not find the requested resource", "level":"fatal", "msg":"failed getting validating webhook", "source":"k8s/k8s.go:48", "time":"2021-08-17T18:08:40Z"}

I tried changing the deployment's --ingress-class=nginx to --ingress-class=nginx2, installing v0.35, and I've also tried kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml

But the same error repeats.

Environment: kubeadm version v1.22.0, Docker version 20.10.7, OS: Ubuntu. I am using an AWS EC2 instance.
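One detail worth checking against the environment above: Kubernetes v1.22 removed the v1beta1 admissionregistration (and v1beta1 Ingress) APIs, and the fatal "failed getting validating webhook ... the server could not find the requested resource" is what the v0.x admission job prints when it asks for a removed API version; that also explains why the ingress-nginx-admission secret never gets created. If that is the cause, a controller release from the v1.x line (which uses the v1 APIs) is needed on a kubeadm v1.22.0 cluster; the tag below is one example of a v1.x release, not a specific recommendation:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/aws/deploy.yaml
```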

Postfix not receiving external mail on Azure virtual machine

Posted: 04 Apr 2022 02:07 AM PDT

I am trying to configure Postfix to receive emails. I am finding pieces of the puzzle online, but no combination of them seems to work. Emails never show up.

Here is my current setup:

  • Ubuntu VM running in Azure with Postfix installed. I can send email locally on the server to itself.
  • Purchased a domain from a registrar and configured it to point at a DNS Zone setup in Azure.
  • Created A and MX records in this DNS Zone. These records have propagated.

Below is my DNS Zone: (The IP address is the public IP address of my Azure virtual machine)

[Screenshot: DNS zone showing the A and MX records]

One of the links I found said that my MX record should point to myVMNAME.cloudapp.net. I tried this, but when I looked up the MX record, MX Toolbox showed "No A Record".

I am not sure where to go from here. Is there something that needs to be changed in the Postfix main.cf file? Am I missing a DNS entry somewhere?
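One concrete detail behind the "No A Record" result: an MX record must point to a hostname that itself resolves via an A record (not a bare IP address). A sketch of the minimal pair of records, with a placeholder domain and the VM's public IP:

```
mail.example.com.    IN A    203.0.113.10
example.com.         IN MX   10 mail.example.com.
```

In Azure DNS terms this means adding an A record (e.g. mail) in the zone and pointing the MX at that name rather than at myVMNAME.cloudapp.net. Separately, the VM's network security group must allow inbound TCP 25, or remote mail servers will never reach Postfix regardless of DNS.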

Curl + Socks5 - escape authentification data

Posted: 04 Apr 2022 04:03 AM PDT

I'm trying to use Curl with Socks5 proxy which needs authentication:

curl -v -x socks5://user:password@PROXY_SERVER_IP:PROXY_PORT http://checkip.amazonaws.com  

However, my login is an email address and the password contains an asterisk. I am trying to escape the special characters to make this work, but nothing I have tried so far has worked. Can anyone help?
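Because the user:password part is parsed as URL userinfo before curl ever sees it, the reliable fix is to percent-encode the reserved characters rather than shell-escape them ('@' becomes %40, '*' becomes %2A). A sketch with placeholder credentials and proxy address:

```shell
# Percent-encode the credentials, then build the proxy URL from the parts.
user='user@example.com'
pass='pa*ssword'
enc_user=$(printf '%s' "$user" | sed 's/@/%40/g')
enc_pass=$(printf '%s' "$pass" | sed 's/\*/%2A/g')
proxy="socks5://${enc_user}:${enc_pass}@203.0.113.10:1080"
echo "$proxy"   # socks5://user%40example.com:pa%2Assword@203.0.113.10:1080
# curl -v -x "$proxy" http://checkip.amazonaws.com
```

Alternatively, passing the credentials out-of-band with --proxy-user 'user@example.com:pa*ssword' (and the proxy as a plain socks5://host:port) avoids URL parsing altogether; for passwords with more exotic characters, a full percent-encoder such as python3 -c 'from urllib.parse import quote; print(quote("pa*ssword", safe=""))' is safer than per-character sed.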

How can I add a certificate to a Windows service's certificate store from the command line?

Posted: 04 Apr 2022 01:03 AM PDT

I want to add a certificate to the certificate store belonging to a Windows service, from the command line. So far, the only thing I've found is:

certutil -service -store ADAM_Instance-Name\My  

When I run it (logged on as myself, in a Command Prompt as Administrator) it returns:

ADAM_Instance-Name\My
CertUtil: -store command FAILED: 0x80070057 (WIN32: 87)
CertUtil: The parameter is incorrect.

I've tried wrapping the Service\Store name in double quotes (same result) and single quotes (same result) and using a forward slash or space instead of the backslash, both giving:

ADAM_Instance-Name\My
CertUtil: -store command FAILED: 0x80070002 (WIN32: 2)
CertUtil: The system cannot find the file specified.

Can anyone help with the syntax for this command, or help with an alternative method?

Sync of SYSVOL content between Windows 2016 Domain Controllers

Posted: 04 Apr 2022 03:01 AM PDT

I have the following setup:

  • Two Domain Controllers in different sites (both Windows Server 2016)
  • The sites are permanently connected via a VPN (so the servers can directly reach each other)
  • The Domain Controllers are in different subnets
  • The Domain Controllers are both Global Catalogs

The problem I have is with the syncing/replication of SYSVOL content. It was syncing fine, but after the reboot of one of the servers it doesn't seem to sync/replicate anymore, while GPOs still sync/replicate without any problem.

Are there any ways to debug the replication of the SYSVOL content, or tools you would recommend to monitor the SYSVOL replication?
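Assuming both DCs are reachable and domain-admin rights are available, the stock Windows tooling gives a reasonable starting point (nothing here is specific to this setup):

```
rem Is SYSVOL replicated by DFS-R or still by FRS, and in which migration state?
dfsrmig /getglobalstate

rem Overall AD replication summary between the DCs:
repadmin /replsummary

rem SYSVOL-specific health tests:
dcdiag /e /test:sysvolcheck /test:advertising

rem Make DFS-R re-read its configuration from AD:
dfsrdiag pollad
```

The "DFS Replication" event log on each DC is also worth checking; after an unclean reboot, DFS-R event 2213 (replication paused pending a manual resume after a dirty shutdown) is a common reason SYSVOL stops replicating while AD replication itself keeps working.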

Thanks

How to debug dnsmasq requiring a service restart in order to work?

Posted: 03 Apr 2022 11:07 PM PDT

On a server of mine I have dnsmasq set up; it is configured the same way as on another server, where it works (although with different hardware and OS).

On this specific host, though, dnsmasq doesn't work after boot (the clients can't resolve names), but it does work if I manually restart it (service dnsmasq restart).

I can't figure out anything from the logs, which don't show any problem. Extract of syslog, immediately after boot:

15:Apr 13 12:31:39 <server_hostname> systemd[1]: Stopping dnsmasq - A lightweight DHCP and caching DNS server...
276:Apr 13 12:32:22 <server_hostname> systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
285:Apr 13 12:32:22 <server_hostname> dnsmasq[592]: dnsmasq: syntax check OK.
511:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: started, version 2.75 cachesize 150
512:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
513:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: DNS service limited to local subnets
514:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: warning: ignoring resolv-file flag because no-resolv is set
515:Apr 13 12:32:22 <server_hostname> dnsmasq-dhcp[622]: DHCP, IP range 192.168.166.2 -- 192.168.166.254, lease time 1h
516:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: using nameserver 209.222.18.218#53
517:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: using nameserver 209.222.18.222#53
518:Apr 13 12:32:22 <server_hostname> dnsmasq[622]: read /etc/hosts - 5 addresses
558:Apr 13 12:32:22 <server_hostname> systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.
568:Apr 13 12:32:24 <server_hostname> dnsmasq-dhcp[622]: DHCP packet received on eth1 which has no address
575:Apr 13 12:32:27 <server_hostname> dnsmasq-dhcp[622]: DHCP packet received on eth1 which has no address
589:Apr 13 12:32:32 <server_hostname> dnsmasq-dhcp[622]: DHCPDISCOVER(eth1) 192.168.166.129 <client_mac>
590:Apr 13 12:32:32 <server_hostname> dnsmasq-dhcp[622]: DHCPOFFER(eth1) 192.168.166.129 <client_mac>
591:Apr 13 12:32:32 <server_hostname> dnsmasq-dhcp[622]: DHCPREQUEST(eth1) 192.168.166.129 <client_mac>
592:Apr 13 12:32:32 <server_hostname> dnsmasq-dhcp[622]: DHCPACK(eth1) 192.168.166.129 <client_mac> <client_hostname>

Entries added after executing service dnsmasq restart:

625:Apr 13 12:33:39 <server_hostname> systemd[1]: Stopping dnsmasq - A lightweight DHCP and caching DNS server...
626:Apr 13 12:33:39 <server_hostname> dnsmasq[622]: exiting on receipt of SIGTERM
627:Apr 13 12:33:39 <server_hostname> systemd[1]: Stopped dnsmasq - A lightweight DHCP and caching DNS server.
628:Apr 13 12:33:39 <server_hostname> systemd[1]: Starting dnsmasq - A lightweight DHCP and caching DNS server...
629:Apr 13 12:33:39 <server_hostname> dnsmasq[875]: dnsmasq: syntax check OK.
630:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: started, version 2.75 cachesize 150
631:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
632:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: DNS service limited to local subnets
633:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: warning: ignoring resolv-file flag because no-resolv is set
634:Apr 13 12:33:39 <server_hostname> dnsmasq-dhcp[885]: DHCP, IP range 192.168.166.2 -- 192.168.166.254, lease time 1h
635:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: using nameserver 209.222.18.218#53
636:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: using nameserver 209.222.18.222#53
637:Apr 13 12:33:39 <server_hostname> dnsmasq[885]: read /etc/hosts - 5 addresses
638:Apr 13 12:33:40 <server_hostname> systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.

The server is an Ubuntu 16.04; dnsmasq version is 2.75-1ubuntu0.16.04.4.

The configuration (under /etc/dnsmasq.d/) is:

bind-interfaces
dhcp-range=eth1,192.168.100.2,192.168.100.254
server=<dns_server_1>
server=<dns_server_2>
no-resolv

How can I debug this problem? What's the possible cause?
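One pattern consistent with the boot log above is the "DHCP packet received on eth1 which has no address" lines: with bind-interfaces, dnsmasq binds only the addresses that exist at startup, so if eth1 receives its address after dnsmasq has started, DNS on that interface never works until a restart. A sketch of a systemd drop-in that delays dnsmasq until the network is fully up (standard systemd targets; the drop-in path is the usual location):

```
# /etc/systemd/system/dnsmasq.service.d/wait-online.conf
[Unit]
Wants=network-online.target
After=network-online.target
```

Run systemctl daemon-reload after creating it. An alternative worth trying is replacing bind-interfaces with dnsmasq's bind-dynamic option, which tracks interface addresses as they appear.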

snmpget error: “No Such Object available on this agent at this OID”

Posted: 04 Apr 2022 02:06 AM PDT

I want to create my own MIB. I've been struggling with this for a couple of weeks. I followed this tutorial and am using net-snmp 5.7.3. What I'm doing is:

My setup: I have two VMs, both Ubuntu 16; one is the snmp-server with IP 192.168.5.20 and the other is the snmp-agent with IP 192.168.5.21. I wrote a MIB, which compiles without any errors (this compilation is done only on the agent system, not on the server). I have already done this:

root@snmp-agent:# MIBS=+MAJOR-MIB
root@snmp-agent:# MIBS=+DEPENDENT-MIB
root@snmp-agent:# export MIBS
root@snmp-agent:# MIBS=ALL

My MIB files are in the path /usr/share/snmp/mibs, which is the default search path. I've already compiled it and generated the .c and .h files successfully with the command mib2c -c mib2c.int_watch.conf objectName, and then configured snmp like this:

root@snmp-agent:# ./configure --with-mib-modules="objectName"
root@snmp-agent:# make
root@snmp-agent:# make install

Everything worked fine. After this, when I run snmptranslate on the agent, I get the output:

root@snmp-agent:# snmptranslate -IR objectName.0
MAJOR-MIB::objectName.0

And with the command snmptranslate -On objectName.0 I get output as:

root@snmp-agent:# snmptranslate -On MAJOR-MIB::objectName.0
.1.3.6.1.4.1.4331.2.1.0

So, I'm getting the expected outputs on the agent system. Now my problem is I don't know how to get the same values from my server!

When I run snmpget, from the server, I get this error:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 MAJOR-MIB::objectName.0
MAJOR-MIB::objectName.0 = No Such Instance currently exists at this OID

Output when specified the OID:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No Such Instance currently exists at this OID

Output when I do these:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 sysDescr.0
SNMPv2-MIB::sysDescr.0 = STRING: Linux snmp-agent 4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017 x86_64

root@snmp-server:# snmpwalk -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No more variables left in this MIB View (It is past the end of the MIB tree)

I have searched and am still searching, but no luck. What should I do? How should I use snmpget from my server on my own MIBs? I mean, something like I do with sysDescr.0 from my server.

I want to do this: snmpget 192.168.5.21 myObjectName.0 and get the values.

EDIT: I have already seen these answers, but they don't work: snmp extend not working and snmp no such object...

UPDATE 2:

When I do snmpwalk on server:

snmp-server:# snmpwalk -v 2c -c ncs -m DISMAN-PING-MIB 192.168.5.21 .1.3.6.1.2.1.80
DISMAN-PING-MIB::pingObjects.0 = INTEGER: 1
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = STRING: "/bin/echo"
DISMAN-PING-MIB::pingMinimumCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.5.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 5
DISMAN-PING-MIB::pingCompliances.6.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.7.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.20.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 4
DISMAN-PING-MIB::pingCompliances.21.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.1.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.3.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 0
DISMAN-PING-MIB::pingMIB.4.1.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48.1 = ""

When I do snmpget with pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48:

root@snmp-server:# snmpget 192.168.5.21 DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = Wrong Type (should be INTEGER): STRING: "/bin/echo"

So where am I going wrong? And what is pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 ? Why such a long OID?

Where am I going wrong? Can anyone point me in the right direction? Any suggestions are greatly appreciated.
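When snmpget works for sysDescr.0 but a private subtree returns "No Such Object/Instance", two usual suspects are (a) the daemon that is actually running is not the freshly built one with --with-mib-modules compiled in (check which snmpd versus the /usr/local/sbin/snmpd produced by the source build, and snmpd --version), and (b) the community's view in snmpd.conf being limited to the system subtree. A sketch of the relevant snmpd.conf lines, reusing the community and enterprise OID from the question:

```
# Too narrow: only exposes the system group (a common distro default)
#rocommunity public default -V systemonly

# Expose the private subtree as well:
rocommunity public default .1.3.6.1.4.1.4331
```

The long pingFullCompliance.15.46.49... index, for what it's worth, is the walked `extend` entry: the trailing numbers are the ASCII codes of the extend token (".1.3.6.1.2.1.80") used as a string index, which is why the OID looks so long.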

Can't connect to a postgres server

Posted: 03 Apr 2022 10:05 PM PDT

I have a strange situation. I have created a second postgres server that will eventually become a slave to my current master; anyway, for testing purposes I have installed postgres and am testing connections from other hosts.

In my postgresql.conf I have a nice and easy:

listen_addresses = '*'    # what IP address(es) to listen on;
                          # comma-separated list of addresses;
                          # defaults to 'localhost'; use '*' for all
                          # (change requires restart)
port = 5432

Then In my pg_hba.conf I have:

# allow all connections, testing only, comment in production
host    all             all             0.0.0.0/0               trust

This postgres server is running in a freebsd jail, so has two IP addresses:

root@postgres ~# telnet 10.1.1.19 5432
Trying 10.1.1.19...
telnet: connect to address 10.1.1.19: Connection refused
telnet: Unable to connect to remote host

root@postgres ~# telnet 127.0.1.19 5432
Trying 127.0.1.19...
Connected to 127.0.1.19.
Escape character is '^]'.
^CConnection closed by foreign host.

root@postgres ~# ifconfig
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
        ether 00:25:90:27:d8:24
        inet 10.1.1.19 netmask 0xffffffff broadcast 10.1.1.19
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet 127.0.1.19 netmask 0xffffffff
        groups: lo

So as you can see, I can connect on port 5432, but only using the loopback address. So then I try a simple:

root@postgres ~# psql -h 127.0.1.19
psql: FATAL:  no pg_hba.conf entry for host "127.0.1.19", user "root", database "root", SSL off

Why is this basic connection getting blocked, when my pg_hba.conf has ALL? And in addition, why can I not connect via the local address 10.1.1.19?

netstat shows the following output:

Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 127.0.1.19.postgresql  *.*                    LISTEN

NOTE: I have another jail set up on another server with a seemingly identical setup which works; that one is on version 9.3.5 and this new server (with the issue) is on 9.6.3.

EDIT: When I change the config to listen on 0.0.0.0 I get this netstat output:

Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 127.0.1.19.postgresql  *.*                    LISTEN
tcp4       0      0 10.1.1.19.3100         *.*                    LISTEN

You can see that SSH is successfully able to listen on the LAN address 10.1.1.19 (on port 3100), so it can't be a jail networking issue; it must be something postgres-related.
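Since host all all 0.0.0.0/0 trust should match 127.0.1.19, a useful sanity check is whether the running server is actually reading the files being edited; a sketch, run in psql as a superuser:

```sql
-- Which files did the running postmaster load?
SHOW config_file;
SHOW hba_file;
SHOW listen_addresses;
-- Reload pg_hba.conf edits (listen_addresses itself needs a full restart):
SELECT pg_reload_conf();
```

If SHOW listen_addresses still reports something other than '*', the postgresql.conf being edited is not the one in the active data directory, or the server was only reloaded rather than restarted after the change.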

Install Language Pack On Windows Server Core (2012 R2)

Posted: 04 Apr 2022 01:03 AM PDT

I have language packs KB3012997 and KB2839636 staged and approved in Windows Server Update Services 2012 R2, but my Windows Server Core 2012 R2 clients refuse to install them. After googling the issue, it appears that these language pack updates cannot be installed via WSUS and have to be manually installed on the clients via the Language Control Panel. Unfortunately the Language Control Panel is not available on the Core edition of Windows Server; neither control.exe input.dll nor control.exe /name Microsoft.Language works. I've tried installing the CAB files manually with dism /online /Add-Package /Package-Name:E:\WsusContent\65\F1C5505C26603C0E907DEDD5A4B3A0E6511E44C65.cab but the updates are not registered as installed in the WSUS console.

How can I go about getting these language packs installed on Server Core 2012 R2? Yes, I know these language packs do little to nothing on Server Core, and that I could work around the issue by creating separate WSUS groups for the Core and non-Core editions of Windows Server and approving these updates only for the non-Core editions. But I'd like to get these updates installed anyway, because if they really were never intended to target Core editions of Windows Server, I assume the WSUS console wouldn't report my Core servers as applicable for them. Right now the only approach I can think of is using a tool like Altiris RapidInstall or Sysinternals Process Monitor to capture the file/registry changes made while adding a language pack on a non-Core edition of Windows Server (after it has been installed with dism.exe) and then applying those changes to the Core edition servers.
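As a sanity check, DISM can report what it believes is installed, which helps distinguish a DISM-level failure from a WSUS reporting gap (a sketch; the package name in the second command is illustrative, not the real KB package identity):

```cmd
REM List every package DISM has registered on the running image
dism /online /Get-Packages /Format:Table

REM Inspect one package by name (name below is illustrative)
dism /online /Get-PackageInfo /PackageName:Microsoft-Windows-Client-LanguagePack-Package~31bf3856ad364e35~amd64~nl-NL~6.3.9600.16384
```

If the package shows as Installed here but WSUS still reports it as needed, the gap is in WSUS detection rather than the install itself.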

Using a specific network adapter for a service

Posted: 04 Apr 2022 03:01 AM PDT

On a Synology NAS this is called: Service-interface binding

Is there something to specify the network adapter used by a certain service? I don't want all applications running on that adapter; I only want to run a few on it.

Otherwise I would have to run a VM to achieve the same result.
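On a Linux host with systemd, one way to approximate Synology's service-interface binding is a socket unit with BindToDevice=, which restricts the listening socket to a single adapter (a sketch; the unit name, port, and interface name are illustrative):

```ini
# /etc/systemd/system/myservice.socket (illustrative name)
[Socket]
ListenStream=8080
# Restrict the listening socket to one network adapter
BindToDevice=eth1

[Install]
WantedBy=sockets.target
```

Many daemons also offer their own bind-address option (e.g. a listen/bind directive), which achieves the same effect by binding to the IP assigned to that adapter.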

inactive option not working for pam_lastlog.so

Posted: 03 Apr 2022 11:07 PM PDT

I'm trying to set up my system to lock out inactive users after 10 days. I'm using CentOS 6.x, and looking at RHEL manual, this is what I found:

To lock out an account after 10 days of inactivity, add, as root,
the following line to the auth section of the /etc/pam.d/login file:

auth  required  pam_lastlog.so inactive=10

So, this is my /etc/pam.d/login :

#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
auth       required     pam_lastlog.so inactive=10
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
session    required     pam_namespace.so
session    optional     pam_keyinit.so force revoke
session    include      system-auth
-session   optional     pam_ck_connector.so

I log in through ssh as a user, and log out.

After that, logged in as root on TTY1, I set the clock one year into the future:

# date --set "...."
# hwclock --systohc

I even reboot the VM, but when it comes back up I am still able to log in as the user through ssh.

Any ideas what I am doing wrong here?
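One point of comparison worth checking (not necessarily the whole fix): with UsePAM yes, sshd is evaluated against /etc/pam.d/sshd, not /etc/pam.d/login, so the rule above never runs for SSH logins. A sketch of the equivalent rule in the sshd stack (assuming a stock CentOS 6 layout):

```conf
# /etc/pam.d/sshd -- sketch; place in the auth section,
# before the include of system-auth/password-auth
auth  required  pam_lastlog.so inactive=10
```

With that in place, the test of setting the clock a year ahead would exercise the SSH path as well.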

Reverse Proxy single sign-on

Posted: 04 Apr 2022 04:03 AM PDT

I have a reverse proxy handling SSL termination and mod_security. The issue is that after SSO reaches the backend server, it tries to authenticate with CAS directly instead of through the proxy; since the backend server is inside our firewall and only the proxy is registered with CAS, authentication fails.

Current configuration for the reverse proxy:

<Location /test/>
   ProxyPreserveHost on
   RequestHeader set WL-Proxy-SSL true
   ProxyPass /TEST/ http://backend.server.com:7010/
   ProxyPassReverse http://backend.server.com:7010/
   ProxyHTMLURLMap http://backend.server.com:7010
   Order allow,deny
   Allow from all
</Location>

<Location /sso/>
   ProxyPreserveHost on
   RequestHeader set WL-Proxy-SSL true
   ProxyPass http://backend.server.com:7007/sso/
   ProxyPassReverse http://backend.server.com:7007/sso/
   ProxyHTMLURLMap http://backend.server.com.edu:7007/sso
   Order allow,deny
   Allow from all
</Location>

Is there some setting I am missing that causes the backend server not to continue using the proxy name?

Tuning a high-traffic nginx and WordPress server

Posted: 04 Apr 2022 02:06 AM PDT

I have been conducting load-tests (via blitz.io) as I attempt to tune server performance on a pool of servers running php 5.5, wordpress 3.9.1, and nginx 1.6.2.

My confusion arises when I overload a single server with too much traffic. I fully realize that there are finite resources on a server and at some level it will have to begin rejecting connections and/or returning 502 (or similar) responses. What's confusing me though, is why my server appears to be returning 502s so early within a load test.

I have attempted to tune nginx to accept many concurrent connections:

nginx.conf

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

site.conf

location ~ \.php$ {
    try_files $uri =404;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_read_timeout 60s;
    fastcgi_send_timeout 60s;
    fastcgi_next_upstream_timeout 0;
    fastcgi_connect_timeout 60s;
}

php www.conf

pm = static
pm.max_children = 8

I expect the load test to saturate the PHP workers rather quickly. But I also expect nginx to continue accepting connections and, once the fastcgi timeouts are hit, to begin returning some sort of HTTP error code.

What I'm actually seeing is nginx returning 502s almost immediately after the test is launched.

nginx error.log

2014/11/01 20:35:24 [error] 16688#0: *25837 connect() to unix:/var/run/php5-fpm.sock failed
(11: Resource temporarily unavailable) while connecting to upstream, client: OBFUSCATED,
server: OBFUSCATED, request: "GET /?bust=1 HTTP/1.1", upstream:
"fastcgi://unix:/var/run/php5-fpm.sock:", host: "OBFUSCATED"

What am I missing? Why aren't the pending requests being queued up, and then either completing or timing out later in the process?
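One knob that commonly produces immediate "Resource temporarily unavailable" 502s under burst load is the accept backlog on the FPM socket: once it fills, further connect() attempts fail instantly instead of queueing. A sketch of raising it (the value is illustrative, and the effective backlog is capped by the kernel's net.core.somaxconn):

```conf
; /etc/php-fpm.d/www.conf -- let more connections queue on the FPM socket
; (illustrative value; capped by the kernel's net.core.somaxconn)
listen.backlog = 1024
```

Raising net.core.somaxconn via sysctl to at least the same value, then restarting php-fpm, lets requests queue and eventually hit the fastcgi timeouts instead of being rejected at connect time.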

php-fpm workers eating lots of memory even on zero traffic

Posted: 03 Apr 2022 10:05 PM PDT

htop dump http://i.stack.imgur.com/EgbDt.png

php-fpm workers are taking a large amount of memory even though there has been zero traffic on the server for some time. What is this memory? Is it leaked memory (Magento runs on that pool) or is it some sort of PHP cache (I use just the APC cache, which should be in shared memory somewhere, though)?

Here is my config:

[www]

listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1

user = www
group = www

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

slowlog = /var/log/php-fpm/www-slow.log

php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 256M

php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

EDIT: I know I am overcommitting my resources a lot here and I have already fixed that, but I still wonder what this memory is and why php-fpm does not release it.
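If the growth is per-worker heap that PHP allocated for past requests and never returns to the OS (common with large applications such as Magento), recycling workers bounds it. A sketch (the value is illustrative):

```conf
; www.conf -- respawn each worker after N requests,
; capping how much per-process memory can accumulate
pm.max_requests = 500
```

With pm = dynamic, idle workers above pm.min_spare_servers are also reaped over time, so memory held by spare workers eventually drops back.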

How do I fallback to boot from local hard drive in iPXE script?

Posted: 03 Apr 2022 10:27 PM PDT

I have a script being loaded from iPXE.

What I want is to make the script fall back to booting from a local hard drive (or CD-ROM) on failure to boot from the SAN.

The idea is to allow installation of an operating system onto the SAN target from a local CD-ROM or USB drive.

I can't see anything in the iPXE documentation that tells me how to boot from a local internal drive. How do I do this?
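For reference, the usual iPXE idiom is to chain commands with ||, so that a failed SAN boot falls through to the first local BIOS drive (0x80). A sketch (the iSCSI target URI is illustrative):

```
#!ipxe
# Try the SAN target first; on failure, fall back to the local disk,
# and drop to the iPXE shell if that also fails
sanboot iscsi:10.0.0.5::::iqn.2000-01.com.example:target || sanboot --no-describe --drive 0x80 || shell
```

The same --drive form can point at other BIOS drive numbers (e.g. a CD-ROM), and exit can be used instead of shell to return control to the BIOS boot order.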
