Wednesday, June 15, 2022

Recent Questions - Server Fault

Retrieving Active Directory Users with no Email address from a certain group in powershell

Posted: 15 Jun 2022 04:16 PM PDT

I'm trying in powershell to output all AD users from a certain group with no data in the email address field.

I have the following command:

get-aduser -filter * -properties * | where {!$_.emailaddress} | select-object samaccountname | export-csv c:\email\noemailusers.csv  

But I'm trying to narrow down the results to only users who are members of a certain group.
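A minimal sketch of one possible approach, enumerating the group first with Get-ADGroupMember (the group name below is a placeholder):

Get-ADGroupMember -Identity "SomeGroup" -Recursive |
    Where-Object { $_.objectClass -eq 'user' } |      # skip nested groups/computers
    Get-ADUser -Properties EmailAddress |
    Where-Object { -not $_.EmailAddress } |
    Select-Object SamAccountName |
    Export-Csv -Path C:\email\noemailusers.csv -NoTypeInformation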

Any help is much appreciated!

Why would kolla_external_vip_interface have to have an ip address configured, if it is used for a floating ip address?

Posted: 15 Jun 2022 03:21 PM PDT

Why would kolla_external_vip_interface need an IP address configured, when neutron_external_interface is supposed to be configured without an IP address?

As far as I understand it, both are supposed to get configured with floating IP addresses by OpenStack.

Relevant excerpts from globals.yml:

# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. It defaults to the
# kolla_internal_vip_address, allowing internal and external communication to
# share the same address.  Specify a kolla_external_vip_address to separate
# internal and external requests between two VIPs.
#kolla_external_vip_address: "{{ kolla_internal_vip_address }}"

...

# These can be adjusted for even more customization. The default is the same as
# the 'network_interface'. These interfaces must contain an IP address.
#kolla_external_vip_interface: "{{ network_interface }}"

...

# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
#neutron_external_interface: "eth1"

The reason I'm asking is that I would like to point neutron_external_interface and kolla_external_vip_interface to the same interface. I have actually tried that, leaving the interface without an IP address, but kolla_external_vip_address, although set on that interface, was not reachable (e.g. no routing was set up for the network/interface).
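A sketch of the globals.yml layout in question, assuming both variables point at the same interface (the addresses below are placeholders):

kolla_internal_vip_address: "192.168.1.250"
kolla_external_vip_address: "203.0.113.250"
kolla_external_vip_interface: "eth1"
neutron_external_interface: "eth1"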

WG3526 modem can't connect to 4G/LTE with OpenWRT

Posted: 15 Jun 2022 02:41 PM PDT

I have a client with a WG3526 4G/LTE modem with a sim card slot.

I had to change the firmware so I can get detailed bandwidth usage.

My issue is that since I installed the generic version of OpenWRT, the modem can't connect to the mobile network anymore.

I know I need to install a new module in OpenWRT but I can't find which one.

I need help to find the required module and maybe to install it.
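For context, a generic OpenWrt image usually needs extra kernel modules and a protocol package before a built-in LTE modem can be brought up; a sketch of what that might look like for a QMI-based modem (the exact package names depend on the modem chipset, so these are an assumption):

opkg update
# USB serial + QMI WWAN drivers, the uqmi userspace tool and LuCI protocol support
opkg install kmod-usb-serial-option kmod-usb-net-qmi-wwan uqmi luci-proto-qmi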

My software will not wait for a server connection before sending messages

Posted: 15 Jun 2022 02:30 PM PDT

I have a very odd problem. I have some software which is supposed to wait for a server connection to be established and then send a message. What's actually happening is that it sends the message right away, before the connection is established, and the message is never received.

However, I have found that by playing with the Ethernet settings I can get it to work once. For example, I can disable "flow control" and then run the software. It will work one time, and then fail every time after that. I can then go back and re-enable "flow control" and it will work once, and then fail every time after that. It works the same for pretty much any other setting: I can enable/disable one and get it to work once.

Surely there is some setting I am missing? It also works a single time after I restart the computer. One time only, and then it no longer waits for the server connection.

Also note this software works on every computer except this one that I am troubleshooting.

Puppet: Issues with Vcsrepo (git) ".... exists and is not the desired repository."

Posted: 15 Jun 2022 02:09 PM PDT

I use Puppet/Vcsrepo to distribute and update software to a bunch of Linux servers from a Bitbucket (cloud) server. This worked fine for years, but about 6 months ago Puppet started complaining about every repository on every run: Error: Path /usr/local/tools/... exists and is not the desired repository. I think the issue may have started when we moved from an on-prem version of Bitbucket to the cloud version.

If I delete the path and run Puppet, it replaces the directory and then barfs again on the next run. I have ended up deleting the repositories whenever I need them updated.

The puppet code has been simplified down to:

define deploy(Array $names) {
  $names.each |$repo| {
    vcsrepo { "/usr/local/tools/$repo":
      ensure   => present,
      provider => git,
      user     => 'tools',
      source   => "https://xxxx@bitbucket.org/uoa/$repo.git",
    }
  }
}

.....

$names_list = [
  'common-library',
  'common-tools'
]

...::deploy {"base-tools":
  names => $names_list,
}

Any ideas what the issue is or how to diagnose the problem?
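As far as I understand, vcsrepo raises "exists and is not the desired repository" when the existing clone's remote no longer matches the source parameter, so one way to start diagnosing is to compare them by hand (a sketch, using one of the repository names above):

cd /usr/local/tools/common-tools
# Compare this against the source URL in the vcsrepo resource
git remote -v
git config --get remote.origin.url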

Mail domain share (iRedMail with Postfix and Gmail)

Posted: 15 Jun 2022 02:06 PM PDT

I'm trying to set up a mail server for a shared domain: half of the mailboxes are hosted on Gmail and the other half on a local server.

I installed iRedMail 1.6.0 and configured Gmail to send all unresolved @example.com users to the local server. It worked when the sender wasn't @example.com... if it was, mail got rejected because of the unauthenticated connection. Adding all gmail subnets to the mynetworks (both in main.cf and settings.py) solved this issue (any better solution?).

The real issue was making mails exit from the local postfix when the mailbox was remotely hosted.

I had to create a virtual mailbox map with the remote mailbox list (awful solution) to get past the "rcpt to:" command. Still after a while I got an undeliverable report:

Action: failed
Status: 5.1.1
Diagnostic-Code: x-unix; user unknown

After some struggling, I found that also adding a transport map with roughly the same data as the virtual mailbox map (in the transport map I also have the local transport rule) made the issue disappear and the mail flow correctly.

Now, using 2 different address lists for the same job is even more awful. I'm looking for a more elegant solution like "forward any unresolved @example.com email to ...", as I have on Gmail. Any suggestion?

Thanks Dario

Why do I see unicast packets for a different IP when I sniff my interface?

Posted: 15 Jun 2022 01:54 PM PDT

I hook up a laptop via gigabit Ethernet to my corporate network and run Wireshark on the interface. I expect to see all broadcast and multicast traffic and unicast traffic either originating from or destined to my laptop's IP only.

For some reason, I also see all unicast traffic destined for one other IP on the network.

Why might that happen? Has anyone seen this behavior before and know what might cause it?

IIUC, switches are supposed to note the source MAC address of frames received on a port, record it in the CAM table, and forward frames destined for that MAC address only to the port where that MAC was last seen. In this case, the switch must not be receiving frames from the MAC address associated with that IP, apparently even though ARP resolved the IP to a MAC. As a result, since the MAC is not found in the CAM table, it floods the frame to all ports. But what kind of odd configuration would cause that?

fdisk doesn't show the external SSD by USB

Posted: 15 Jun 2022 01:41 PM PDT

Good day. I was setting up my SSD on Linux, and it was going well:

mkfs.exfat /dev/sdg1  

I mounted it on a folder:

mount /dev/sdg1 /owncloud  

This was going well too, but then I made a mistake. While /owncloud was mounted, without needing to, I created another folder in the root, /ssd2t, and moved the folder /owncloud (still mounted as /dev/sdg1) inside /ssd2t.

/
 |
  \___ssd2t
        \___owncloud

The problem is that I mounted sdg1 as /owncloud and then moved /owncloud inside /ssd2t without unmounting sdg1 first. As a result, when I now run fdisk -l the SSD doesn't show up. I have disconnected and reconnected the SSD, and fdisk -l still doesn't show it.

Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 53CDB35D-584B-4C60-BCA3-038A47886F72

Device          Start        End    Sectors   Size Type
/dev/sda1        2048     206847     204800   100M EFI System
/dev/sda2      206848     239615      32768    16M Microsoft reserved
/dev/sda3      239616 1952452659 1952213044 930.9G Microsoft basic data
/dev/sda4  1952454656 1953521663    1067008   521M Windows recovery environment


Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000MX500SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0B5656D5-779F-4552-9155-2A6F4CCE391A

Device          Start        End    Sectors   Size Type
/dev/sdc1        2048    1050623    1048576   512M EFI System
/dev/sdc2     1050624 1951522815 1950472192 930.1G Linux filesystem
/dev/sdc3  1951522816 1953523711    2000896   977M Linux swap

I ran lsusb and it shows the USB device:

Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 046d:c077 Logitech, Inc. M105 Optical Mouse
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)
Bus 001 Device 009: ID 1e3d:198a Chipsbank Microelectronics Co., Ltd Flash Disk
Bus 001 Device 003: ID 1a2c:0e24 China Resource Semico Co., Ltd USB Keyboard
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The device in question is:

Bus 001 Device 009: ID 1e3d:198a Chipsbank Microelectronics Co., Ltd Flash Disk  

and dmesg shows:

[ 1903.472577] usb 1-6: new high-speed USB device number 9 using xhci_hcd
[ 1903.697481] usb 1-6: New USB device found, idVendor=1e3d, idProduct=198a, bcdDevice= 1.00
[ 1903.697489] usb 1-6: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[ 1903.704163] usb-storage 1-6:1.0: USB Mass Storage device detected
[ 1903.704353] scsi host9: usb-storage 1-6:1.0
[ 1904.715545] scsi 9:0:0:0: Direct-Access     ChipsBnk Flash Disk       5.00 PQ: 0 ANSI: 2
[ 1904.716037] sd 9:0:0:0: Attached scsi generic sg1 type 0
[ 1904.780749] sd 9:0:0:0: [sdb] Attached SCSI removable disk
[ 1913.489121] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1915.437504] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1924.561365] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1950.162273] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1953.169931] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1970.258609] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1970.630629] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1991.087047] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1991.458712] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1991.834694] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1992.210709] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1994.614771] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 1994.938784] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 2006.639382] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 2007.011386] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 2009.427432] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 2015.475201] usb 1-9: reset high-speed USB device number 4 using xhci_hcd
[ 2045.616274] usb 1-9: reset high-speed USB device number 4 using xhci_hcd

The output shows:

[ 1904.716037] sd 9:0:0:0: Attached scsi generic sg1 type 0
[ 1904.780749] sd 9:0:0:0: [sdb] Attached SCSI removable disk

Is this wrong?

How can I resolve it, at minimum so that fdisk -l shows the external SSD again?

Thank you very much.

Detach GCP persistent disk from source snapshot

Posted: 15 Jun 2022 05:14 PM PDT

After creating a disk from a source snapshot and deleting the snapshot, the disk continues to show a source snapshot, even though that doesn't exist anymore.

Is this something that's going to get cleaned up at some point? It's not clear what purpose that attribute has on the disk object. It also messes up the Terraform configuration by requiring a parameter that's not needed in "regular" disks created from scratch.

Because of this attribute, Terraform wants to recreate the disk. Maybe this is a bug in the GCP provider.
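A possible Terraform-side workaround, assuming the disk is managed as a google_compute_disk and that ignoring drift on the snapshot attribute is acceptable (a sketch, not a confirmed fix):

resource "google_compute_disk" "restored" {
  name = "restored-disk"   # placeholder
  zone = "us-central1-a"   # placeholder
  size = 100

  lifecycle {
    # Ignore the source-snapshot attribute so Terraform doesn't plan a
    # disk recreation after the snapshot has been deleted.
    ignore_changes = [snapshot]
  }
}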

"postconf: warning: /etc/postfix/main.cf: undefined parameter: mail" when running the command "postconf default_process_limit" in Ubuntu 20.04 Server

Posted: 15 Jun 2022 03:02 PM PDT

I am running an Ubuntu 20.04 LEMP Server. It is an Email and Web Server. I am having a puzzling warning in my postconf messages that I need help fixing.

When I run the command

postconf default_process_limit

it returns the warning message:

postconf: warning: /etc/postfix/main.cf: undefined parameter: mail

Below is my /etc/postfix/main.cf file.

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

#smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_banner = $myhostname ESMTP
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
# Don't use self-signed certificates
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_tls_security_level=may

### Linuxbabe
# Enable TLS Encryption when Postfix receives incoming emails
smtpd_tls_cert_file=/etc/letsencrypt/live/mail.facl.xyz/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/mail.facl.xyz/privkey.pem
smtpd_tls_security_level=may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache

### Linuxbabe
#Enable TLS Encryption when Postfix sends outgoing emails
smtp_tls_security_level = may # Original by Linuxbabe. Use this if emails break.
#smtp_tls_security_level = encrypt
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

### Linuxbabe
#Enforce TLSv1.3 or TLSv1.2
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1

# Default 3 = uncommented
#smtp_tls_CApath=/etc/ssl/certs
#smtp_tls_security_level=may
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# The below line prevents you from being an open relay, which means that your mail server wont forward email on behalf of anyone towards any destination, like open relays do.
# This line tells Postfix to forward email only from clients in trusted networks, from clients that have authenticated with SASL, or to domains that are configured as authorized relay destinations.
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mail.example.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname, localhost.$mydomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = ipv4
message_size_limit = 104857600

#### Linuxbabe
# Tell Postfix to deliver incoming emails to local message store via the Dovecot LMTP server.
mailbox_transport = lmtp:unix:private/dovecot-lmtp
# Disable SMTPUTF8 in Postfix, because Dovecot-LMTP doesn't support this email extension
smtputf8_enable = no

### Linuxbabe Chapter 3
virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf
virtual_mailbox_maps =
   proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf,
   proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf
virtual_alias_maps =
   proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf,
   proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf,
   proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf

### Linuxbabe chapter 3
virtual_transport = lmtp:unix:private/dovecot-lmtp

### Linuxbabe Chapter 3
# The first line defines the base location of mail files. The remaining 3 lines
# define which user ID and group ID Postfix will use when delivering incoming
# emails to the mailbox. We use the user ID 2000 and group ID 2000.
virtual_mailbox_base = /var/vmail
virtual_minimum_uid = 2000
virtual_uid_maps = static:2000
virtual_gid_maps = static:2000

### Linuxbabe Chapter 3 Ending
# By default, any local user can use the sendmail binary to submit outgoing emails.
# Now that your mail server is using virtual mailboxes, you might want to restrict
# access to the sendmail binary to trusted local users only, so a malicious user
# can't use it to send a large volume of emails to damage your mail server's reputation
authorized_submit_users = root,www-data,ubuntu,netdata

# deliver emails to local message store via the dovecot LMTP server (roundcube tutorial, sieve section, linuxbabe)
mailbox_transport = lmtp:unix:private/dovecot-lmtp
# disables SMTPUTF8 in Postfix, because Dovecot-LMTP doesn't support this email extension (roundcube tutorial, seive section, linuxbabe)
smtputf8_enable = no

### Linuxbabe, rouncube tutorial, sieve section
# Help remove sensitive info (such as roundcube version number) from email headers
# This line is overridden by smtp_header_checks line at bottom of page
#smtp_header_checks = regexp:/etc/postfix/smtp_header_checks

### LINUXBABE CHAPTER 4
# The first line specifies the Postfix policy agent timeout setting. The following lines will impose a restriction on incoming emails by rejecting unauthorized email and checking SPF record.
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
   permit_mynetworks,
   permit_sasl_authenticated,
   reject_unauth_destination,
   check_policy_service unix:private/policyd-spf,
### LinuxBabe Chapter "7 Effective Tips for Blocking Email Spam with Postfix SMTP Server"
   # Make Postfix use the Postgrey policy server.
   check_policy_service inet:127.0.0.1:10023,
   # Tip #6, Whitelisting
   check_client_access hash:/etc/postfix/rbl_override,
### LinuxBabe Chapter ""7 Effective Tips for Blocking Email Spam with Postfix SMTP Server"
   # Tip #6: Using Public Realtime Blacklists
   reject_rhsbl_helo dbl.spamhaus.org,
   reject_rhsbl_reverse_client dbl.spamhaus.org,
   reject_rhsbl_sender dbl.spamhaus.org,
   # The single line below is for public whitelisting, with dnswl whitelisting by ip address
   permit_dnswl_client list.dnswl.org=127.0.[0..255].[1..3],
   # The single line below is for whitelisting, however, if using spamhaus.org for blacklisting, then you don't need this on a whitelist as it is impossible for an IP address to be listed in Spamhaus whitelist and blacklist at the same time.
   #permit_dnswl_client swl.spamhaus.org,
   reject_rbl_client zen.spamhaus.org

### LINUXBABE CHAPTER 4
# Milter configuration for Postfix to be able to call OpenDKIM via the milter protocol.
#milter_default_action = accept
#milter_protocol = 6
#smtpd_milters = local:opendkim/opendkim.sock
#non_smtpd_milters = $smtpd_milters

### Linuxbabe Chapter "7 Effective Tips for Blocking Email Spam with Postfix SMTP Server"
## Tip #1
smtpd_sender_restrictions =
   permit_mynetworks
   permit_sasl_authenticated
## Tip #4
   # Add the following line to reject email if the domain name of the address supplied with the MAIL FROM command has neither MX record nor A record.
   reject_unknown_sender_domain
   # This directive rejects an email if the client IP address has no PTR Record.
   reject_unknown_reverse_client_hostname
## Tip #3
   # A legitimate email server should have an IP address returned from an A record, that matches the IP address of the email server.
   # To filter out emails from hosts that don't have a valid A record (IP doesnt match that of the server) add the following two lines below.
   #reject_unknown_reverse_client_hostname # Duplicate from above.
   reject_unknown_client_hostname
   # ^^^ NOTE: reject_unknown_client_hostname does not require HELO from SMTP client. It will fetch the hostname from PTR record, then check the A record.

## Tip #2
# Add the following line to require the client to provide a HELO/EHLO hostname.
smtpd_helo_required = yes
# Add the following 3 lines to enable smtpd_helo_restrictions
smtpd_helo_restrictions =
   permit_mynetworks
   permit_sasl_authenticated
   # Ocasionally, a legitimate mail server doesn't have a valid A record for the HELO/EHLO hostname. You need to whitelist them with the line below ("check_helo_access" directive).
   # Don't forget to add whitelisted domains the the helo_access whitelist file, located at /etc/postfix/helo_access
   check_helo_access hash:/etc/postfix/helo_access
   # Use the following line to reject clients who provide malformed HELO/EHLO hostname.
   reject_invalid_helo_hostname
   # Use the following line to reject non-fully qualified HELO/EHLO hostname.
   reject_non_fqdn_helo_hostname
   # Use the following line to reject emails when the HELO/EHLO hostname has neither DNS A records nor MX Records.
   reject_unknown_helo_hostname

### Linuxbabe, Chapter "Block Email Spam with Postfix and SpamAssassin Content Filter"
# Use header checks with PCRE
header_checks = pcre:/etc/postfix/header_checks
body_checks = pcre:/etc/postfix/body_checks

# Milter configuration (Note: Order of these matters. Don't change the order.)
# https://www.linuxbabe.com/mail-server/block-email-spam-check-header-body-with-postfix-spamassassin
milter_default_action = accept
milter_protocol = 6
smtpd_milters = local:opendkim/opendkim.sock,local:opendmarc/opendmarc.sock,local:spamass/spamass.sock
# If you haven't configured openDmarc you should remove local:opendmarc/opendmarc.sock from smtpd_milters in /etc/postfix/main.cf
#smtpd_milters = local:opendkim/opendkim.sock,local:spamass/spamass.sock
non_smtpd_milters = $smtpd_milters

### LInuxbabe "Spamassassin and Sieve Chapter"
# You can use smtp_header_checks to delete email headers that could show sensitive information.
# smtp_header_checks are only applied when Postfix is acting as an SMTP client, so it won't affect incoming emails.
# You might not want the recipient to know that you are using SpamAssassin on your mail server.
# So add the following line, which tells Postfix to delete the X-Spam-Status and X-Spam-Checker-Version header
# from the email message when sending emails
smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

### Linuxbabe "Postfix-amavis" chapter
# This tells Postfix to turn on content filtering by sending every incoming email message to Amavis, which listens on 127.0.0.1:10024.
content_filter = smtp-amavis:[127.0.0.1]:10024
# This will delay Postfix connection to content filter until the entire email message has been received, which can prevent content filters from wasting time and resources for slow SMTP clients.
smtpd_proxy_options = speed_adjust

# This setting was added by boo to fix "SPF: HELO does not publish an SPF Record" on mail-tester.com
smtp_helo_name = $mail.example.com

What does this warning mean and how can I fix it? Do I have errors somewhere in my postfix files and what exactly should I be looking for?
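The warning normally means that some line expands a parameter literally named "mail" (i.e. a $mail... reference). One way to track down where, as a sketch:

# Show every line that references a parameter starting with $mail
grep -n '\$mail' /etc/postfix/main.cf /etc/postfix/master.cf
# Show only the explicitly set (non-default) parameters
postconf -n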

Thanks for any help!

Sendmail 8.15 is not rewriting sender domain

Posted: 15 Jun 2022 02:02 PM PDT

Sendmail 8.15 on a RHEL 8 system. I am able to send mail out, but when I receive it on the other end, the 'from' address is user@FQDN instead of user@MASQUERADE_AS.

what we get: user@realhostname.realdomain.com

what we want: user@somedomain.com.au

sendmail.mc config contains these among other macros:

MASQUERADE_AS(`somedomain.com.au')dnl
LOCAL_DOMAIN(`localhost.localdomain')dnl
EXPOSED_USER(`root')dnl
FEATURE(masquerade_envelope)dnl
MASQUERADE_DOMAIN(somedomain.com.au)dnl

We do want the root user exposed as coming from the FQDN for management reasons, but we want all other users to show as user@MASQUERADE_AS.

Testing the configuration, things appear as they ought:

> =SHdrFromL
R< @ >          MAILER-DAEMON
R@ < @ $* >             MAILER-DAEMON
R$+             $: $> AddDomain $1
R$*             $: $> MasqHdr $1
> /tryflags HS
> /try local user
Trying header sender address user for mailer local
canonify           input: user
Canonify2          input: user
Canonify2        returns: user
canonify         returns: user
1                  input: user
1                returns: user
HdrFromL           input: user
AddDomain          input: user
AddDomain        returns: user < @ *LOCAL* >
MasqHdr            input: user < @ *LOCAL* >
MasqHdr          returns: user < @ somedomain . com . au . >
HdrFromL         returns: user < @ somedomain . com . au . >
final              input: user < @ somedomain . com . au . >
final            returns: user @ somedomain . com . au
Rcode = 0, addr = user@somedomain.com.au
> /try esmtp user
Trying header sender address user for mailer esmtp
canonify           input: user
Canonify2          input: user
Canonify2        returns: user
canonify         returns: user
1                  input: user
1                returns: user
HdrFromSMTP        input: user
PseudoToReal       input: user
PseudoToReal     returns: user
MasqSMTP           input: user
MasqSMTP         returns: user < @ *LOCAL* >
MasqHdr            input: user < @ *LOCAL* >
MasqHdr          returns: user < @ somedomain . com . au . >
HdrFromSMTP      returns: user < @ somedomain . com . au . >
final              input: user < @ somedomain . com . au . >
final            returns: user @ somedomain . com . au
Rcode = 0, addr = user@somedomain.com.au
>

Scratching my head here. Are there other config directives I need to be looking at?

And, yes, I did run 'make' after editing the sendmail.mc file. :)

somedomain.com.au is an MX for the externally visible exit point for our mail (spf records, etc.).

Tunneling a LEMP server through an oracle instance with wireguard. Cannot get Nginx Certbot certificates on LEMP server. How to debug?

Posted: 15 Jun 2022 03:12 PM PDT

I have a LEMP server at home running Ubuntu 22.04 and an Oracle Cloud instance running Ubuntu 20.04. The Oracle Cloud instance is acting as a WireGuard server. The LEMP server at home is acting as the WireGuard client and is being tunneled through the Oracle server in order to obtain an IP address that is different from my home IP address. I have set up this WireGuard client/server configuration per this Linuxbabe.com tutorial. The WireGuard client is up and running and can successfully ping the Oracle server. The LEMP server (WireGuard client) also successfully resolves to the Oracle server's public IP address. I have also installed openresolv on the VPN client, and bind9 on the VPN server, to use the Oracle instance's DNS from the LEMP server (the WireGuard client). My domain registrar is pointing its DNS to the same IP as my Oracle server.

Now, I am attempting to install Prosody on the LEMP server and cannot obtain certificates through the certbot nginx plugin. It seems as though something is blocking port 80/443 and that port 80/443 is not open. When I run (on my LEMP server) the command:

sudo certbot -v --nginx --agree-tos --redirect --hsts --staple-ocsp --email example@example.com -d chat.example.com  

I get the following error output:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Requesting a certificate for chat.example.com
Performing the following challenges:
http-01 challenge for chat.example.com
Waiting for verification...
Challenge failed for domain chat.example.com
http-01 challenge for chat.example.com

Certbot failed to authenticate some domains (authenticator: nginx). The Certificate Authority reported these problems:
  Domain: chat.example.com
  Type:   connection
  Detail: 150.136.56.232: Fetching http://chat.example.com/.well-known/acme-challenge/GlzBhvxB_hDYefMW48qaHq3I-qc_NArj7VWml54bofM: Connection refused

Hint: The Certificate Authority failed to verify the temporary nginx configuration changes made by Certbot. Ensure the listed domains point to this nginx server and that it is accessible from the internet.

Cleaning up challenges
Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.

I am using UFW as my firewall, and my UFW status on the LEMP Server (the VPN Client) is:

Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    my.local.lan.ip/24
[ 2] 43211/tcp                  ALLOW IN    my.local.lan.ip/24
[ 3] 5222,5269/tcp              ALLOW IN    Anywhere
[ 4] 80,443/tcp                 ALLOW IN    Anywhere
[ 5] 5222,5269/tcp (v6)         ALLOW IN    Anywhere (v6)
[ 6] 80,443/tcp (v6)            ALLOW IN    Anywhere (v6)

My UFW Status on the Oracle Cloud Instance is:

Status: active

     To                         Action      From
     --                         ------      ----
[ 1] Anywhere                   ALLOW IN    10.10.10.0/24
[ 2] 22/tcp                     ALLOW IN    my.home.public.ip
[ 3] 22/tcp                     ALLOW IN    my.work.public.ip
[ 4] 5222,5269/tcp              ALLOW IN    Anywhere
[ 5] 51820/udp                  ALLOW IN    my.home.public.ip
[ 6] 80,443/tcp                 ALLOW IN    Anywhere
[ 7] 5222,5269/tcp (v6)         ALLOW IN    Anywhere (v6)
[ 8] 80,443/tcp (v6)            ALLOW IN    Anywhere (v6)

Port 51820/udp is my wireguard port for both the Oracle Instance, as well as the Lemp Server. I have also forwarded my public IP address on the Oracle Instance to the VPN client, so that the client can send and receive on the same public port that the Oracle Instance uses. Below is my /etc/ufw/before.rules file. The port forwarding modifications I made are under the comments entitled "Linuxbabe...".

#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
#   ufw-before-input
#   ufw-before-output
#   ufw-before-forward
#

# Don't delete these required lines, otherwise there will be errors
*filter
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-not-local - [0:0]
# End required lines


# allow all on loopback
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-output -o lo -j ACCEPT

# quickly process packets for which we already have a connection
-A ufw-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# drop INVALID packets (logs these in loglevel medium and higher)
-A ufw-before-input -m conntrack --ctstate INVALID -j ufw-logging-deny
-A ufw-before-input -m conntrack --ctstate INVALID -j DROP

# ok icmp codes for INPUT
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT
-A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT

# ok icmp code for FORWARD
-A ufw-before-forward -p icmp --icmp-type destination-unreachable -j ACCEPT
-A ufw-before-forward -p icmp --icmp-type time-exceeded -j ACCEPT
-A ufw-before-forward -p icmp --icmp-type parameter-problem -j ACCEPT
-A ufw-before-forward -p icmp --icmp-type echo-request -j ACCEPT

## Linuxbabe tutorial
# allow forwarding for trusted network
-A ufw-before-forward -s 10.10.10.0/24 -j ACCEPT
-A ufw-before-forward -d 10.10.10.0/24 -j ACCEPT

# allow dhcp client to work
-A ufw-before-input -p udp --sport 67 --dport 68 -j ACCEPT

#
# ufw-not-local
#
-A ufw-before-input -j ufw-not-local

# if LOCAL, RETURN
-A ufw-not-local -m addrtype --dst-type LOCAL -j RETURN

# if MULTICAST, RETURN
-A ufw-not-local -m addrtype --dst-type MULTICAST -j RETURN

# if BROADCAST, RETURN
-A ufw-not-local -m addrtype --dst-type BROADCAST -j RETURN

# all other non-local packets are dropped
-A ufw-not-local -m limit --limit 3/min --limit-burst 10 -j ufw-logging-deny
-A ufw-not-local -j DROP

# allow MULTICAST mDNS for service discovery (be sure the MULTICAST line above
# is uncommented)
-A ufw-before-input -p udp -d 224.0.0.251 --dport 5353 -j ACCEPT

# allow MULTICAST UPnP for service discovery (be sure the MULTICAST line above
# is uncommented)
-A ufw-before-input -p udp -d 239.255.255.250 --dport 1900 -j ACCEPT

# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT

### Linuxbabe - UFW Tutorial
## NAT table rules
*nat
# Nat Pre-Routing
:PREROUTING ACCEPT [0:0]
# !Http! Forward oracle.server.public.ip (Server Public IP) TCP port 80 to 10.10.10.2:80 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 80 -j DNAT --to-destination 10.10.10.2:80

# !HTTPS! Forward oracle.server.public.ip (Server Public IP) TCP port 443 to 10.10.10.2:443 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 443 -j DNAT --to-destination 10.10.10.2:443

# !PROSODY-5222! Forward oracle.server.public.ip (Server Public IP) TCP port 5222 to 10.10.10.2:5222 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 5222 -j DNAT --to-destination 10.10.10.2:5222

# !PROSODY-5269! Forward oracle.server.public.ip (Server Public IP) TCP port 5269 to 10.10.10.2:5269 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 5269 -j DNAT --to-destination 10.10.10.2:5269

# !PROSODY-BOSH-5280! Forward oracle.server.public.ip (Server Public IP) TCP port 5280 to 10.10.10.2:5280 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 5280 -j DNAT --to-destination 10.10.10.2:5280

# !PROSODY-BOSH-5281! Forward oracle.server.public.ip (Server Public IP) TCP port 5281 to 10.10.10.2:5281 (VPN Client IP)
-A PREROUTING -i ens3 -d oracle.server.public.ip -p tcp --dport 5281 -j DNAT --to-destination 10.10.10.2:5281
COMMIT

### Linuxbabe - Wireguard Tutorial
# Nat Postrouting
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.10.10.0/24 -o ens3 -j MASQUERADE

# End each table with the 'COMMIT' line or these rules won't be processed
COMMIT

My /etc/nginx/nginx.conf file on the LEMP Server (the wireguard client) looks like this:

#user  nginx;
user www-data;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

My /etc/nginx/conf.d/prosody.conf file on the LEMP Server (VPN Client) looks like this:

server {
      listen 80;
      listen [::]:80;
      server_name chat.example.com;

      root /var/www/prosody/;

      location ~ /.well-known/acme-challenge {
         allow all;
      }
}

Lastly, I have enabled IP Forwarding on the Oracle Server by uncommenting the line net.ipv4.ip_forward = 1 in /etc/sysctl.conf.

After all of this, my LEMP server seems to be successfully using the Oracle instance's VPN server tunnel. HOWEVER, it still cannot obtain certificates from certbot using the prosody.conf nginx config file. As far as I can tell, with all of the research I've done, this setup (most importantly the VPN forwarding rules in before.rules) should successfully allow my LEMP server to obtain certificates using the Oracle instance's IP address. But it is NOT DOING SO!

So my question is: what must I do to debug this, what is blocking my LEMP server's ports 80/443, and what must I do to successfully obtain certbot certificates for Prosody using my Oracle instance's public IP address?
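A few generic checks that might help narrow down where the connection is being dropped (a sketch; the WireGuard interface name wg0 is an assumption, the hostname is the example used above):

# On the Oracle instance: are the port 80/443 DNAT rules actually matching packets?
sudo iptables -t nat -L PREROUTING -v -n
# On the LEMP server: does the ACME validation request arrive over the tunnel?
sudo tcpdump -ni wg0 'tcp port 80'
# From a host outside both networks: is plain HTTP reachable at all?
curl -I http://chat.example.com/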

Cannot resize XFS partition after volume increase

Posted: 15 Jun 2022 01:46 PM PDT

I have increased the size of my volume from 8GB to 16GB. After resizing the volume, I would like to expand the partition to take on the additional space.

However, this operation is failing with data size unchanged, skipping.

$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        482M     0  482M   0% /dev
tmpfs           492M     0  492M   0% /dev/shm
tmpfs           492M  408K  492M   1% /run
tmpfs           492M     0  492M   0% /sys/fs/cgroup
/dev/xvda1       16G  7.6G  8.5G  48% /
tmpfs            99M     0   99M   0% /run/user/1000

$ sudo xfs_growfs -d /
meta-data=/dev/xvda1             isize=512    agcount=9, agsize=524159 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0
data     =                       bsize=4096   blocks=4193787, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping

The partition in question is /dev/xvda1. As you can see, the operation is "skipping" and the Avail parameter remains 8GB. What could be the issue?
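For reference, the usual sequence after enlarging the underlying volume is to grow the partition first and only then the filesystem; a sketch, assuming the cloud-utils growpart tool is available:

# Grow partition 1 of /dev/xvda to fill the enlarged disk
sudo growpart /dev/xvda 1
# Then grow the XFS filesystem mounted at /
sudo xfs_growfs -d /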

Tomcat 9.0.36 https configuration to port 8443 not responding

Posted: 15 Jun 2022 02:03 PM PDT

I have a Tomcat 9.0.36 running on a Ubuntu 18.04.4 LTS (virtual)machine. I am trying to configure it to use a certificate that I got from a CA. I generated successfully the keystore file using keytool, and tried to configure Tomcat to listen on port 8443 for https traffic.

The relevant fields in server.xml:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

<Connector port="8443" protocol="HTTP/1.1" maxThreads="100"
       scheme="https" secure="true"
       SSLEnabled="true"
       clientAuth="false"
       sslProtocol="TLS"
       keyAlias="correctAlias"
       keystoreFile="/usr/lib/jvm/jdk-12.0.2/lib/security/keystorefile.jks"
       keystorePass="correctPassWord" />

I have tried multiple configurations of both ports (with or without redirectPort=..., with the other connector commented out, etc.), and the end result is the same. On port 8080 everything works fine (when the connector is enabled), but on 8443 the result is:

This site can't be reached {my ip} took too long to respond.   

netstat -plnt gives the following:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address
tcp   0  0 0.0.0.0:8080    0.0.0.0:*       LISTEN      22748/java
tcp   0  0 0.0.0.0:8443    0.0.0.0:*       LISTEN      22748/java
...

So I assume there is something that prevents the server from responding, as catalina.out doesn't give any errors, just the startup information that http-nio-8080 and http-jsse-nio-8443 have started, and everything seems to be running.

I am aiming for the end result to be that everything redirects to the https protocol, and nothing is unencrypted.
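On the "redirect everything to HTTPS" goal: separate from the 8443 problem, the usual mechanism is a security constraint in conf/web.xml; a sketch, assuming the standard Tomcat web.xml layout:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Entire application</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <!-- CONFIDENTIAL makes Tomcat redirect plain HTTP to the connector's redirectPort -->
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>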

Change default interface docker container

Posted: 15 Jun 2022 05:02 PM PDT

I'm struggling with this problem for two days.

Assumptions:

  • Docker network (and interface) named docknet type bridge subnet 172.18.0.0/16
  • Two interfaces eth0 (Gateway IP: 192.168.1.1, Interface Static IP: 192.168.1.100) and eth1 (Gateway IP:192.168.2.1, Interface Static IP: 192.168.2.100)
  • Default routing goes through eth0

What I want:

  • Outgoing traffic from container attached to docknet must go to eth1

What I tried:

  • Default iptable rule created by docker left untouched:

-A POSTROUTING -s 172.18.0.0/16 ! -o docknet -j MASQUERADE

  • My rules:

iptables -t mangle -I PREROUTING -s 172.18.0.0/16 -j MARK --set-mark 1

ip rule add from all fwmark 1 table 2

Where table 2 is:

default via 192.168.2.1 dev eth1 proto static

With this setup when I try to ping 8.8.8.8 from a container (172.18.0.2) attached to docknet the following happens:

  • 172.18.0.2 gets translated to 192.168.2.1
  • the packet goes through eth1
  • the packet returns to eth1 with src addr 8.8.8.8 and dst 192.168.2.1

From here a reverse translation from 192.168.2.1 back to 172.18.0.2 should happen, but running tcpdump -i any host 8.8.8.8 shows no trace of this translation.

I checked out also conntrack -L and this is the result:

icmp 1 29 src=172.18.0.2 dst=8.8.8.8 type=8 code=0 id=9 src=8.8.8.8 dst=192.168.2.1 type=0 code=0 id=9 mark=0 use=1
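One generic thing that can silently break this kind of policy routing is strict reverse-path filtering on the return interface, so it may be worth ruling out (an assumption, not a confirmed cause):

# Check the current setting (0 = off, 1 = strict, 2 = loose)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth1.rp_filter
# Loose mode is often needed when replies come back on a non-default interface
sudo sysctl -w net.ipv4.conf.all.rp_filter=2
sudo sysctl -w net.ipv4.conf.eth1.rp_filter=2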

Useful info:

  • eth1 is actually a 4G usb dongle
  • ip forwarding is active
  • curl --interface eth1 ipinfo.io works as expected

EDIT:

output from ip -d link show eth1

eth1: mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 00:b0:d6:00:00:00 brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

Exchange 2013 CU18 install fails with ADObjectAlreadyExistsException

Posted: 15 Jun 2022 01:42 PM PDT

I'm updating from Exchange 2013 RTM to Exchange 2013 CU18.

Exchange upgrade is failing currently with the following error...

Microsoft Exchange Server 2013 Cumulative Update 18 Unattended Setup

Copying Files...
File copy complete. Setup will now collect additional information needed for installation.
Languages
Mailbox role: Transport service
Client Access role: Front End Transport service
Mailbox role: Client Access service
Mailbox role: Unified Messaging service
Mailbox role: Mailbox service
Management tools
Client Access role: Client Access Front End service

Performing Microsoft Exchange Server Prerequisite Check

Configuring Prerequisites                                 COMPLETED
Prerequisite Analysis                                     COMPLETED

Configuring Microsoft Exchange Server

Organization Preparation                                  FAILED

The following error was generated when "$error.Clear();
    install-GlobalAddressLists -DomainController $RoleDomainController
" was run: "Microsoft.Exchange.Data.Directory.ADObjectAlreadyExistsException: Active Directory operation failed on HOSTEX-1.mydomain.cloud. The object 'CN=Default Global Address List,CN=All Global Address Lists,CN=Address Lists Container,CN=Hosted Exchange,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=mydomain,DC=cloud' already exists. ---> System.DirectoryServices.Protocols.DirectoryOperationException: The object exists.
   at System.DirectoryServices.Protocols.LdapConnection.ConstructResponse(Int32 messageId, LdapOperation operation, ResultAll resultType, TimeSpan requestTimeOut, Boolean exceptionOnTimeOut)
   at System.DirectoryServices.Protocols.LdapConnection.SendRequest(DirectoryRequest request, TimeSpan requestTimeout)
   at Microsoft.Exchange.Data.Directory.PooledLdapConnection.SendRequest(DirectoryRequest request, LdapOperation ldapOperation, Nullable`1 clientSideSearchTimeout, IActivityScope activityScope, String callerInfo)
   at Microsoft.Exchange.Data.Directory.ADDataSession.ExecuteModificationRequest(ADObject entry, DirectoryRequest request, ADObjectId originalId, Boolean emptyObjectSessionOnException, Boolean isSync)
   --- End of inner exception stack trace ---
   at Microsoft.Exchange.Data.Directory.ADDataSession.AnalyzeDirectoryError(PooledLdapConnection connection, DirectoryRequest request, DirectoryException de, Int32 totalRetries, Int32 retriesOnServer)
   at Microsoft.Exchange.Data.Directory.ADDataSession.ExecuteModificationRequest(ADObject entry, DirectoryRequest request, ADObjectId originalId, Boolean emptyObjectSessionOnException, Boolean isSync)
   at Microsoft.Exchange.Data.Directory.ADDataSession.Save(ADObject instanceToSave, IEnumerable`1 properties, Boolean bypassValidation)
   at Microsoft.Exchange.Data.Directory.SystemConfiguration.ADConfigurationSession.Save(ADConfigurationObject instanceToSave)
   at Microsoft.Exchange.Management.Deployment.InstallGlobalAddressLists.CreateDefaultGal(ADObjectId defaultGal)
   at Microsoft.Exchange.Management.Deployment.InstallGlobalAddressLists.InternalProcessRecord()
   at Microsoft.Exchange.Configuration.Tasks.Task.b__b()
   at Microsoft.Exchange.Configuration.Tasks.Task.InvokeRetryableFunc(String funcName, Action func, Boolean terminatePipelineIfFailed)".

The Exchange Server setup operation didn't complete. More details can be found in ExchangeSetup.log located in the :\ExchangeSetupLogs folder.

Any help at all is appreciated!

Connection to server times out with SSH and takes down any possible connection to the same server within the working network

Posted: 15 Jun 2022 07:04 PM PDT

After updating and re-initiating gcloud and its SSH connections with gcloud update, gcloud init, and then gcloud compute config-ssh --remove && gcloud compute config-ssh, any connection to the server (say ssh hostname) sometimes and unpredictably develops as follows:

The configuration on vim ~/.ssh/config regarding Google Cloud Platform is:

 20 # Google Compute Engine Section
 21 #
 22 # The following has been auto-generated by "gcloud compute config-ssh"
 23 # to make accessing your Google Compute Engine virtual machines easier.
 24 #
 25 # To remove this blob, run:
 26 #
 27 #   gcloud compute config-ssh --remove
 28 #
 29 # You can also manually remove this blob by deleting everything from
 30 # here until the comment that contains the string "End of Google Compute
 31 # Engine Section".
 32 #
 33 # You should not hand-edit this section, unless you are deleting it.
 34 #
 35 Host hostname
 36     HostName XXX.XXX.XX.XX
 37     IdentityFile /Users/userz/.ssh/google_compute_engine
 38     UserKnownHostsFile=/Users/userz/.ssh/google_compute_known_hosts
 39     HostKeyAlias=compute.1736255031467791084
 40     IdentitiesOnly=yes
 41     CheckHostIP=no
 42
 43 # End of Google Compute Engine Section

And the output of ssh -vvv hostname is:

OpenSSH_7.5p1, OpenSSL 1.0.2l  25 May 2017
debug1: Reading configuration data /Users/userz/.ssh/config
debug1: /Users/userz/.ssh/config line 35: Applying options for hostname
debug1: Reading configuration data /usr/local/etc/ssh/ssh_config
debug2: resolving "XXX.XXX.XX.XX" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to XXX.XXX.XX.XX [XXX.XXX.XX.XX] port 22.
debug1: connect to address XXX.XXX.XX.XX port 22: Operation timed out
ssh: connect to host XXX.XXX.XX.XX port 22: Operation timed out

After that, any connection to the server, even going to domain.com, trying telnet XXX.XXX.XX.XX 22 or ping XXX.XXX.XX.XX, results in a timeout for anyone within the local network.

What can I do to solve this? Or how can I start to debug this one?


System information:

$ gcloud -v

Google Cloud SDK 173.0.0
alpha 2017.09.15
beta 2017.09.15
bq 2.0.26
core 2017.09.25
gcloud
gsutil 4.27

MacOS Sierra 10.12.6

What's the correct way to force RPM erase & Install instead of upgrade?

Posted: 15 Jun 2022 02:45 PM PDT

I have an RPM whose software recently had its database schema completely redesigned. The current version (We'll call it B.0) is incompatible with older versions of the database schema.

Fortunately it's still in beta, and since it's not officially released I'd like to avoid writing any database migration scripts unless (and until) it actually becomes necessary to do so.

An upgrade will fail, because the (new) database schema creation happens in a scriptlet that is never run during an Upgrade, which puts the application in a bad state. If A.x is installed, I want them to erase and re-install.

I have found RPM's "Conflicts" tag, which does seem to do the job, but the error message is misleading "file ____ conflicts with file from package A.x" and someone with enough savvy might try to Upgrade and replace conflicts, which isn't useful as the required scriptlets to create the database still won't be run.

Is it possible to print a more descriptive error message when this Conflict occurs? Is there a "right way to do this" alternative?
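One alternative that might give a clearer message is to fail the upgrade path explicitly in a scriptlet instead of relying on Conflicts; a sketch of the idea (whether aborting in %pre is acceptable for your build is an assumption):

%pre
# $1 is the count of package instances after this transaction:
# 1 = fresh install, 2 or more = upgrade
if [ "$1" -ge 2 ]; then
    echo "Upgrading from A.x is not supported: the database schema changed." >&2
    echo "Please erase the old package (rpm -e %{name}) and install fresh." >&2
    exit 1
fi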

Is it possible to expose an HTTP proxy on localhost via ngrok?

Posted: 15 Jun 2022 01:46 PM PDT

I have tinyproxy installed on localhost (I also tried squid and failed similarly). It runs on port 8888. This works:

curl -x localhost:8888 http://www.google.com  

And I see output in the tinyproxy logs.

Now I run the latest version of ngrok and get the following forwarding established:

http://<identifier>.ngrok.io -> localhost:8888  

If I open http://<identifier>.ngrok.io on my browser the ngrok connection counter goes up, and I successfully see tinyproxy's "not implemented" error page.

However, this fails:

curl -x <identifier>.ngrok.io:80 http://www.google.com  # => curl: (56) Received HTTP code 404 from proxy after CONNECT  

The ngrok counter does not go up, and I do not see messages in the tinyproxy logs, suggesting that the failure is before ngrok accepted the connection.

What am I missing here? I had assumed that both ngrok and tinyproxy forward the HTTP request, but it seems like there might be another protocol operating behind the scenes of curl -x. Is there a way to successfully establish the proxy chain starting at ngrok and going through my localhost?

How to dismount a VHD that has been absorbed into a Storage Space

Posted: 15 Jun 2022 01:00 PM PDT

Normally in Windows, you can create a Virtual Hard Disk (VHD), see it in the Disk Management, right-click, and remove it:


No problem.

Storage Spaces Hides Disks

I want to test the resiliency of Storage Spaces, and the ability to migrate them between servers. Once a disk has been added to a Storage Pool, it no longer appears in Disk Management. I need to surprise remove a VHD from the server, but there's no UI option to do it.

If i were using a physical disk, it would be trivial to disconnect the drive from the computer (thus ensuring that Storage Spaces doesn't expect it, and has no chance to rebalance beforehand):


How to unplug a vhd?

But once a disk is added to a storage pool, there is no longer any UI options for that disk. I don't want to use the Storage Spaces UI to Remove a physical disk from the pool:


Because:

  • storage spaces will migrate data from that disk onto other disks in the pool (defeating the point of a surprise removal)
  • storage spaces will migrate data from that disk onto other disks in the pool (defeating the point of the test of migrating disks to another server)


I need the surprise removal

In the same way I can unplug a physical hard disk, how do I unplug a virtual hard disk?

Microsoft has the exact PowerShell commandlet to disconnect a vhd:

Dismount-VHD

Dismounts a virtual hard disk.

PS C:> Dismount-VHD –Path c:\test\testvhdx.vhdx

Unfortunately it doesn't work - for the same reason. The VHD is "gone", "invisible", absorbed into the Storage Pool, gone from any ability to interact with it:

PS C:\Windows\system32> Dismount-VHD –Path D:\VirtualMachines\StorageSpaceTest\StorageSpaceTest1.vhdx
Dismount-VHD : Windows cannot dismount the disk because it is not mounted.
At line:1 char:1
+ Dismount-VHD –Path D:\VirtualMachines\StorageSpaceTest\StorageSpaceTest1.vhdx
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (Microsoft.Vhd.P...mountVhdCommand:DismountVhdCommand) [Dismount-VHD], VirtualizationOperationFailedException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.Vhd.PowerShell.DismountVhdCommand

It seems once a VHD has been added to a storage pool, it cannot be dismounted.

How do I dismount a VHD that has been added to a storage pool?

DiskPart

The command line tool diskpart has a command to detach a virtual disk:

DISKPART> select vdisk file="F:\Foo\Contoso37.vhd"

DiskPart successfully selected the virtual disk file.

DISKPART> detach vdisk

Virtual Disk Service error:
The virtual disk is already detached.

But it fails for the same reason, the .vhdx has already been "detached" when it went into the StoragePool.

Bonus Chatter

The current VHDX files are sitting in D:\..., which is itself a Parity Space. I cannot simply turn off the PC and rename the files. Not only is turning off the server not a viable option right now, it defeats the point of the test: having Storage Spaces react while it is running.

Edit: Same thing happens in Windows Server 2016

Mod_evasive not blocking a DOS attack using HEAD requests

Posted: 15 Jun 2022 05:02 PM PDT

Using Apache/2.2.15 on RHEL6 with mod_evasive config:

DOSHashTableSize    3097
DOSPageCount        14
DOSPageInterval     2
DOSSiteCount        70
DOSSiteInterval     1
DOSBlockingPeriod   60

Unfortunately it didn't block this attack, which only came from 1 IP:

207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:53 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:53 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:53 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"
207.xxx.xxx.xxx - - [14/Jun/2015:06:06:54 +0400] "HEAD / HTTP/1.1" 200 - "-" "some fake user agent"

Mod_evasive does work; it blocks some IPs in other cases. Does it not work for HEAD requests?

EDIT: My Apache is running in prefork mode. From what I've read, mod_evasive has issues with prefork.
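
If prefork is the culprit (mod_evasive appears to keep its counters per child process, so a flood spread across many children may never cross the thresholds in any single child), one workaround that doesn't depend on Apache at all is a per-IP rate limit in iptables. A rough sketch — the window and hit count below are illustrative placeholders, not tuned values:

# Track new connections to port 80 per source IP, drop bursts above the limit
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 2 --hitcount 20 --name HTTP -j DROP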

How can I install nomachine over ssh on Suse (not openSuse)?

Posted: 15 Jun 2022 03:01 PM PDT

I have spent about 2 hours with no luck on this one. I want to use the NX protocol for remote desktop because, so I hear, it is much faster when going over the internet. And their client allows me to easily restore disconnected sessions, another plus for flaky internet connections.

I downloaded the latest from www.nomachine.com and it will not allow me to connect over port 22. Port 22/SSH is my only option, as arbitrary ports such as 4000 are blocked on the server this is being installed on. I have tried downloading and installing the latest server versions on the Linux machine, but the main one and the enterprise version do not allow SSH. I have also tried looking for previous versions such as 3.5 and cannot find an installer for those either. I looked through various FreeNX sites and RPM repos, but they all have dead links, readme.txt files stating the files are no longer hosted for some reason, or point back to nomachine.com, which no longer has them. I have spent about 2 hours on this in total and feel pretty foolish for struggling with something so seemingly trivial...
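
One generic workaround that doesn't depend on the NoMachine server speaking SSH at all is to tunnel the NX port through the SSH connection that is permitted. A sketch, assuming the NX service listens on its default port 4000 on the remote machine (user and host names are placeholders):

# Forward local port 4000 to the NX service on the remote host over SSH
ssh -L 4000:localhost:4000 myuser@remote-server
# ...then point the NoMachine client at localhost:4000 using the NX protocol.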

xen-create-image does not create initrd or initramfs image and domU does not start with system image

Posted: 15 Jun 2022 07:04 PM PDT

I have Fedora 19 as Dom0. To create the image I run:

# xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy

After generation finished, I start the VM and see:

# xl create /etc/xen/debian-wheezy.cfg
Parsing config from /etc/xen/debian-wheezy.cfg
libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory
libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3

In /etc/xen/debian-wheezy.cfg I have:

#
#  Kernel + memory size
#
kernel      = '/boot/vmlinuz-3.11.2-201.fc19.x86_64'
ramdisk     = '/boot/initrd.img-3.11.2-201.fc19.x86_64'

and ls -1 /boot/*201* shows

/boot/config-3.11.2-201.fc19.x86_64
/boot/initramfs-3.11.2-201.fc19.x86_64.img
/boot/System.map-3.11.2-201.fc19.x86_64
/boot/vmlinuz-3.11.2-201.fc19.x86_64

Then if I fix the ramdisk directive in the .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img, the VM will start but the OS inside will not boot. In the tail of xl console I get:

[  OK  ] Reached target Basic System.
dracut-initqueue[130]: Warning: Could not boot.
dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
dracut-initqueue[130]: Warning: /dev/fedora/root does not exist
dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist
dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist
dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist
dracut-initqueue[130]: Warning: /dev/xvda2 does not exist
Starting Dracut Emergency Shell...
Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
Warning: /dev/fedora/root does not exist
Warning: /dev/fedora/swap does not exist
Warning: /dev/mapper/fedora-root does not exist
Warning: /dev/mapper/fedora-swap does not exist
Warning: /dev/xvda2 does not exist

Generating "/run/initramfs/sosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

dracut:/#

The .img files in /xen/domains/debian-wheezy exist and are listed in the disk section of debian-wheezy.cfg.

So what should I do?

Update:

I've found that xl does not mount the images. In debian-wheezy.cfg I have this:

root        = '/dev/xvda2 ro'
disk        = [
                  'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
                  'file:/xen/domains/debian-wheeze/swap.img,xvda1,w',
              ]

And there are no /dev/xvda*, /dev/sda*, or /dev/hda* files in the VM.
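
For comparison, here is a rough, untested sketch of a config that avoids the dom0 Fedora kernel and initramfs entirely (its dracut initramfs is looking for Fedora LVM volumes) by letting the guest boot its own kernel via pygrub. This assumes the guest image actually contains a kernel and a grub config (xen-tools can arrange that with its --pygrub option); the bridge name is illustrative, and note the swap path is spelled consistently here, unlike the 'debian-wheeze' path above:

name        = 'debian-wheezy'
memory      = 512
bootloader  = 'pygrub'
root        = '/dev/xvda2 ro'
disk        = [
                  'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
                  'file:/xen/domains/debian-wheezy/swap.img,xvda1,w',
              ]
vif         = [ 'bridge=xenbr0' ]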

Why am I getting a Sudden Drop in throughput and sluggishness with no CPU increase

Posted: 15 Jun 2022 03:01 PM PDT

Occasionally, during random parts of the day, I get a 10-minute period of extreme sluggishness where my requests take 50-1000 times longer than they normally do. Note: I am on Apache/2.2.16 (Debian), running PHP 5.3.3.

Newrelic shows that the time is not spent in the Database, it's supposedly spent while PHP is executing before the first line of code (according to some traces). At the same time, I see a huge drop in throughput to nearly 1/3 the normal amount.

When I look at the graphs, I can see that CPU, memory, disk IO, and CPU wait IO are all at steady levels: no spikes at all. I don't see any error messages in the error log for PHP or the web server during that time. The server has more than enough memory; according to New Relic it's only using about 25%. Total memory is 3.3 GB.

Note: The load average is about .25 on two cores, hence load is fairly low. I typically get about 1000-1500 requests per minute. Response times are usually 15ms to 150ms.

here are some of my apache configs:

<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadLimit          64
    ThreadsPerChild      25
    MaxClients          550
    MaxRequestsPerChild   0
</IfModule>

<IfModule mpm_event_module>
    StartServers          2
    MaxClients          550
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadLimit          64
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>

MaxClients is set that high because our average memory per process is very low: about 1-4 MB.

The only explanation I can think of is that my host is dropping connectivity or is having some sort of connectivity issue. Which wouldn't surprise me, since this host (RimuHosting) has been less than reliable.

Is there any other possible explanation?
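
One thing that might help narrow it down (a diagnostic sketch, not a fix) is enabling mod_status so the scoreboard can be inspected the next time a stall happens; if most worker slots are busy writing or waiting on the network while CPU stays flat, that points at the host or network rather than PHP. Apache 2.2-style access directives:

ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
# During a slow period:
#   curl http://localhost/server-status?auto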

Publishing a web app listening on two different ports with TMG

Posted: 15 Jun 2022 02:03 PM PDT

We have an internal web app built on some custom Java stuff. The app listens on port 8080, but also uses IIS on port 80 to get user authentication.

Opening the site in a browser produces HTTP GETs to both ports/urls.

Is a case like this possible to publish with TMG?

Apache suddenly very slow on http and faster on https

Posted: 15 Jun 2022 04:01 PM PDT

Background: I have Apache 2 running on Ubuntu. There is low usage on it; it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates. I now have both HTTP and HTTPS. When I access the server using HTTPS, I get a fairly quick response (but probably not as fast as before). When I use HTTP, it is very slow.

What I tried: From this post:

  • I curl localhost from the host and it takes some time, meaning there is no routing issue.
  • The server runs on Amazon EC2 instance and is managed by me only.

Also:

  • I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but it still takes over a minute and I always have MaxClients Apache processes.
  • dmesg returns many [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.
  • When I run netstat I get many entries with SYN_RECV. Possibly a DDoS attack?
  • From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent.
  • I tried to follow this solution to limit incoming connections using iptables; still no luck, but I'm trying.

Question: What could be the problem? Is this a DDoS attack?

Update: I discussed this problem here. It was indeed a DDoS attack; some solutions are discussed too.
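
For anyone hitting the same symptoms (a SYN_RECV backlog and "Possible SYN flooding" in dmesg), here is a minimal mitigation sketch — the values are illustrative, not tuned:

# Make sure SYN cookies are on and give the backlog more room
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=2048

# Cap concurrent connections per source IP on port 80
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP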

Windows Installer GPO - "Prohibit user installs" value is missing on Windows 7

Posted: 15 Jun 2022 04:01 PM PDT

The old Windows XP GPO template had three options for "Windows Installer\Prohibit user installs":

  1. Allow user installs

  2. Hide user installs

  3. Prohibit user installs

The new Windows 7 template has only the first two options.

Is this an error, or has the third option "Prohibit user installs" been removed in Windows 7?
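
For reference, the registry value the old policy wrote can at least be inspected directly — the value name below is from memory, so verify it against the Windows 7 ADMX before relying on it:

reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v DisableUserInstalls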

Thank you

ASP.NET Error "The specified network password is not correct"

Posted: 15 Jun 2022 06:06 PM PDT

I have a .NET application on Server 2008 running IIS 7.5, with two sites bound to the same IP address. A customer accidentally moved the application root for one of the sites to a network drive, and now I am getting the error "The specified network password is not correct", even after I've changed the directory back to the correct one and verified that the security permissions are correct. The other site is running fine, but I can't work out what I need to do to get this one working.
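
From what I can tell, this error often means IIS is still holding "connect as" credentials left over from the UNC path rather than a file-permission problem. A sketch of how the stored path and credentials could be inspected and reset with appcmd — the site name and path below are placeholders:

cd %windir%\system32\inetsrv

REM Show the physical path and any stored connect-as credentials
appcmd list vdir "Default Web Site/" /text:*

REM Point the vdir back at the local folder and clear the stored credentials
appcmd set vdir "Default Web Site/" -physicalPath:"D:\Sites\MyApp" -userName:"" -password:""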

Hopefully someone can help me with this!

Thanks

nginx not interpreting some PHP files

Posted: 15 Jun 2022 01:00 PM PDT

I am trying to port a website on php / mysql / apache on one server to another server.

The other server was a raw Linux box on Amazon EC2. I am not an advanced Linux admin. I have some basic knowledge, though.

So I installed nginx and set the doc root to /home/webroot/. I set up a PHP CGI server and started it on port 9000. I installed MySQL and imported the dump. I created an FTP user called webroot with permissions to /home/webroot/ and added it to vsftpd to upload all the files to /home/webroot/.

Perfect.

I access subdomain.site.com and it resolves, but it returns a blank page. It is not interpreting index.php and other uploaded PHP files. However, it is interpreting some of the new files I created. It either shows a blank page or dumps the entire PHP code to the browser.

I created an example test.php and populated it with some code to read request variables and print them. I access it at subdomain.site.com/test.php?id=2 and it does interpret it. It also connects to the database if I invoke it from the command line.

Why is it working with some files but not with others? Is it a PHP error or an nginx error? Any ideas? Here is nginx.conf:

server {
    listen       80;
    server_name  sub.domain.com;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        root   /home/webroot;
        index  index.html index.htm index.php;
    }

    error_page  404              /404.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /home/webroot$fastcgi_script_name;
        include        /etc/nginx/fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

# Load config files from the /etc/nginx/conf.d directory
include /etc/nginx/conf.d/*.conf;

}

Any ideas?
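
Two things worth checking, sketched below. First, if the old files use "<?" short open tags while the new test files use "<?php", blank pages and dumped source can come from php.ini's short_open_tag setting rather than from nginx. Second, the config can be simplified so the static and PHP locations share one root; this is a sketch, not a guaranteed fix:

server {
    listen       80;
    server_name  sub.domain.com;
    root         /home/webroot;
    index        index.html index.htm index.php;

    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        # $document_root avoids hard-coding the path a second time
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        /etc/nginx/fastcgi_params;
    }
}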

What naming convention do Windows DHCP logs use when they overflow?

Posted: 15 Jun 2022 06:06 PM PDT

We have some issues with DNS and DHCP interacting, so we're archiving the DHCP event logs using PowerShell and dumping the contents to an SQL database for later analysis. However, the archive script only grabs files named 'DhcpSrvLog-$day.log' (where $day is Mon, Tue, etc.), and the log files are limited to 10 MB in size. After that point they overflow, but we can't find any documentation on what the next log file is called. It doesn't overflow often, but on the days that it does, the first file often rolls over or overflows by 3pm.

I'd like to be able to archive the complete log files on the days the system is logging that much, and I'm assuming Windows doesn't just stop writing the logs. Is there any way to increase the log size (I believe this is fixed), or can someone tell me the naming convention for the second log of a given day?

OS is Windows Server 2008 R2 x64.
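
On the log-size half of the question: as far as I can tell, the DHCP audit log size is controlled by a registry value rather than being truly fixed. A sketch — the value name is from memory, so verify it before relying on it; the number is the combined size in MB for the week's files, so each day's file gets roughly a seventh of it:

REM Check the current cap
reg query "HKLM\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters" /v DhcpLogFilesMaxSize

REM Example: raise it to 700 MB total (~100 MB per day), then restart the service
reg add "HKLM\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters" /v DhcpLogFilesMaxSize /t REG_DWORD /d 700 /f
net stop dhcpserver && net start dhcpserver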

[For reference, the issues we're looking for themselves are likely related to the system being upgraded continually since the mid 90's. It was originally a Novell domain, then NT 3.51 --> NT 4 --> 2k --> 2k3 --> 2k8 --> 2k8 R2... plus several custom schema extensions which don't work well at all and a history of administrators who had no idea what they were doing. The domain has more issues than Time magazine, so this problem doesn't surprise anyone. We're in the process of moving to a brand new domain, but we're trying to troubleshoot this DHCP/DNS issue to make sure the same problem doesn't happen on the new domain, too.]
