Saturday, August 28, 2021

Recent Questions - Server Fault

Why can't I ping an IPv6-enabled device?

Posted: 28 Aug 2021 09:13 PM PDT

I have two devices, each connected to a mobile hotspot from a different network provider. Visiting myipaddress.com shows me IPv6 addresses for both devices, so I assume they can be reached over the internet. However, when I ping them from my computer, which is connected to a third network provider, the first device responds successfully while pinging the second returns "request timed out". My question is: why does the ping work on one network provider and not the other, even though both assign IPv6 addresses to these devices? Does one of the network providers restrict access through a firewall? If so, is there a way to get around it? Would appreciate a response. Thanks guys
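For anyone debugging the same thing: mobile carriers routinely filter unsolicited inbound traffic (including ICMPv6) at the CGNAT/packet-gateway level, which is outside the subscriber's control. A quick way to see where packets die is to probe from the third network; the addresses below are documentation placeholders, not real ones:

ping -6 2001:db8::1         # device on provider A: replies
ping -6 2001:db8::2         # device on provider B: times out
traceroute6 2001:db8::2     # shows the hop where the provider starts dropping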

Btrfs on multiple disks with "archiving on slowest drive" balancing strategy

Posted: 28 Aug 2021 07:58 PM PDT

I was wondering if the following setup is possible using Btrfs.

I have a laptop which has (and I guess this is going to become more common) a 500GB SSD and a 2TB HDD. I would like to mainly use the SSD to benefit from its fast performance. When running low on space on that device, I would like to use the HDD as storage for files that are used less frequently and/or that are getting older (archiving).

Currently, although I have Btrfs on both disks, I mount them on two different mount points (e.g. SSD subvolumes on /, /home, ... and the HDD on /srv/attic), and when I'm running low on disk space in my working space, I manually select the files I need least and move them to /srv/attic.

I was wondering if, with the right combination of multi-device profile (single? raid0?), btrfs balance with some filters (usage?), and some scripting, it would be possible to achieve the same idea transparently, so that from the user's point of view they are using a 2.5TB filesystem (or a 2.0TB filesystem if the SSD has to be mirrored; it doesn't matter much), with recently used files (blocks) staying fast and older ones being (probably not so noticeably) slower.

The icing on the cake would be for it to be subvolume-aware, so that system files are left on the fast drive, while user files can be balanced.
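For reference, the multi-device building blocks involved would look roughly like the sketch below (mount point and device names are hypothetical). It is only a sketch of the mechanics, not a confirmed solution: btrfs balance redistributes chunks across devices according to the profile, but it has no notion of "hot" versus "cold" data, so the age-based placement would indeed need external scripting:

# add the HDD to the existing SSD filesystem
btrfs device add /dev/sdb /mnt
# rebalance, touching only data/metadata chunks that are under 20% full
btrfs balance start -dusage=20 -musage=20 /mnt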

Deny unencrypted S3 buckets via SCP

Posted: 28 Aug 2021 07:19 PM PDT

Folks, just wondering if there's a way to attach an SCP to OU accounts that denies S3 buckets from being created unless default encryption is opted into upfront.

From CloudTrail it's apparent that PutBucketEncryption and CreateBucket are not in the same transaction.

Also, CreateBucket doesn't accept encryption via headers in its API call.

So adding a condition like the one below might not yield anything at all.

"StringNotEquals": {
    "s3:x-amz-server-side-encryption": "AES256"
}

Any leads guys? Appreciate your responses. Cheers!
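As the question already suspects, CreateBucket carries no encryption header, so an SCP cannot see default-encryption intent at creation time. What an SCP can enforce is encryption at the object level; a commonly used sketch (the Sid and targeting are illustrative) is:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnencryptedPuts",
    "Effect": "Deny",
    "Action": "s3:PutObject",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": {
        "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
      }
    }
  }]
}

The known gotcha: this also denies uploads that send no header at all and rely on the bucket's default encryption, since the policy is evaluated before default encryption is applied.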

WordPress asking for FTP when deleting plugins

Posted: 28 Aug 2021 06:41 PM PDT

WordPress asks me for FTP credentials when I try to delete or install plugins. I know it has to do with permissions, but I have been unable to figure it out. I have a Linux system user XYZ, and apache2 runs as www-data. This works:

sudo chown www-data:www-data -R /path/to/wordpress
sudo chmod 700 -R /path/to/wordpress

But it's unsafe. My initially planned configuration was:

sudo chown XYZ:www-data -R /path/to/wordpress
sudo chmod 750 -R /path/to/wordpress
sudo chmod 770 -R /path/to/wordpress/wp-content

According to the WordPress docs, wp-content is the only folder the webserver should have write access to. It contains the plugins and themes folders.

But it doesn't work. I've spent several hours researching online, but nothing has helped so far and I don't know what else to try. What are the right permissions to allow automated updates and plugin installation, without giving the webserver write access to everything?

edit: For whatever reason, the following does not work:

sudo chown XYZ:www-data -R /path/to/wordpress
sudo chmod 770 -R /path/to/wordpress

I thought it would be identical to the first variant above, giving www-data write access to everything. But it doesn't do the trick.
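For what it's worth, a common middle ground, sketched here under the assumption that the paths above are real and that PHP runs as www-data, is to set directory and file modes separately instead of one recursive chmod, and then tell WordPress to write files directly instead of falling back to FTP:

sudo chown -R XYZ:www-data /path/to/wordpress
# directories 750, files 640: group can read, not write
sudo find /path/to/wordpress -type d -exec chmod 750 {} \;
sudo find /path/to/wordpress -type f -exec chmod 640 {} \;
# group-writable only where WordPress needs it
sudo find /path/to/wordpress/wp-content -type d -exec chmod 770 {} \;
sudo find /path/to/wordpress/wp-content -type f -exec chmod 660 {} \;

The reason mode bits alone don't stop the FTP prompt: WordPress's get_filesystem_method() creates a temporary file and compares its owner with the owner of the WordPress files; when the files belong to XYZ but PHP runs as www-data, that check fails regardless of write permission, which also explains the 770 mystery in the edit above. Adding define('FS_METHOD', 'direct'); to wp-config.php bypasses the check.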

JBoss AS v6.1: managing applications

Posted: 28 Aug 2021 04:09 PM PDT

On our installation we typically restart the entire JBoss AS every time we need to restart one of the WARs that are deployed (in expanded form) under ..../deploy folder.

I thought I'd be able to individually start/stop/update WARs using the admin console and the CLI, and without stopping the entire AS, but just today I discovered that (a) our WARs are seen as "embedded" and cannot be managed via the console, and (b) there is no CLI for AS, only for EAP.

Can I, perhaps, do the same thing by "brute force", i.e. removing the expanded WAR from the folder (to effectively undeploy), or replacing it with the new version (to effectively update), while the server runs?

Please advise.

PS. Yes, I know, 6.1 is very, very old, it's a long story.
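On the "brute force" idea: JBoss AS ships a hot-deployment scanner that watches the deploy directory, so manipulating the expanded WAR directly is a recognised, if crude, technique. A sketch, assuming the default profile and an install root of /opt/jboss-6.1.0.Final (hypothetical path):

DEPLOY=/opt/jboss-6.1.0.Final/server/default/deploy

# undeploy: remove the expanded WAR while the server runs
rm -rf "$DEPLOY/myapp.war"

# update: copy the new version in, then touch the descriptor to trigger a redeploy
cp -r /tmp/myapp.war "$DEPLOY/"
touch "$DEPLOY/myapp.war/WEB-INF/web.xml"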

Is there a simple way to turn a Linux machine/cluster into an object-based storage device?

Posted: 28 Aug 2021 03:38 PM PDT

I was wondering if there is software I can install on a Linux machine/cluster that could somehow "replace" the existing file system with object-based storage; something I can also run CRUD operations against.

Is there on-prem software I can install, without purchasing AWS S3 or another cloud service?
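One frequently suggested option here is MinIO, which serves an S3-compatible object API on top of an ordinary Linux directory (Ceph with its RADOS gateway being the heavier clustered alternative). A minimal single-node sketch:

# download and run the MinIO server against a local directory
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data

CRUD then works with the mc client or any S3 SDK/aws-cli pointed at http://localhost:9000.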

UFW OpenVPN issue on Ubuntu 20.04

Posted: 28 Aug 2021 04:43 PM PDT

I've got a curious OpenVPN / UFW issue on Ubuntu 20.04.

I have a rule set to allow outgoing traffic over tun0: ufw insert 1 allow out on tun0 from any to any. The UFW defaults are set to deny, both in & out: ufw default deny outgoing & ufw default deny incoming.

I'm only able to route traffic through tun0 with UFW running if I go through the following strange dance each and every time I want to connect to the VPN:

  1. ufw disable (disable UFW, as you'd expect, to allow VPN to connect to server)
  2. Connect to VPN (connection successfully establishes)
  3. ufw enable (re-enable UFW) - So far, as expected - now I'd expect traffic to be sent out via tun0 without any issues ... but no. I now have to do the following...
  4. Add a rule to allow all outgoing connections through any interface: ufw insert 1 allow out from any to any
  5. Establish a connection anywhere - e.g. ping 1.1.1.1. This is the vital step - without which subsequent connections through tun0 fail
  6. Delete the rule I just added that allows all outgoing connections through any interface (since that is clearly not what we want - the intention is to limit connections to tun0 as per the existing rule): ufw delete 1

Now, I am able to establish connections through the VPN tunnel, as expected. However, without steps 4 & 5, all connections are blocked by UFW; I am unable to connect through tun0, even though there is an explicit UFW rule set to allow it.

Here is my UFW user.rules file (I have an SSH rule too):

*filter
:ufw-user-input - [0:0]
:ufw-user-output - [0:0]
:ufw-user-forward - [0:0]
:ufw-before-logging-input - [0:0]
:ufw-before-logging-output - [0:0]
:ufw-before-logging-forward - [0:0]
:ufw-user-logging-input - [0:0]
:ufw-user-logging-output - [0:0]
:ufw-user-logging-forward - [0:0]
:ufw-after-logging-input - [0:0]
:ufw-after-logging-output - [0:0]
:ufw-after-logging-forward - [0:0]
:ufw-logging-deny - [0:0]
:ufw-logging-allow - [0:0]
:ufw-user-limit - [0:0]
:ufw-user-limit-accept - [0:0]
### RULES ###

### tuple ### allow any 22 0.0.0.0/0 any 192.168.0.0/16 in
-A ufw-user-input -p tcp --dport 22 -s 192.168.0.0/16 -j ACCEPT
-A ufw-user-input -p udp --dport 22 -s 192.168.0.0/16 -j ACCEPT

### tuple ### allow any any 0.0.0.0/0 any 0.0.0.0/0 out_tun0
-A ufw-user-output -o tun0 -j ACCEPT

### tuple ### deny any any 0.0.0.0/0 any 0.0.0.0/0 out
-A ufw-user-output -j DROP

### tuple ### deny any any 0.0.0.0/0 any 0.0.0.0/0 in
-A ufw-user-input -j DROP

### END RULES ###

### LOGGING ###
-A ufw-after-logging-input -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-after-logging-output -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-after-logging-forward -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-I ufw-logging-deny -m conntrack --ctstate INVALID -j RETURN -m limit --limit 3/min --limit-burst 10
-A ufw-logging-deny -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10
-A ufw-logging-allow -j LOG --log-prefix "[UFW ALLOW] " -m limit --limit 3/min --limit-burst 10
### END LOGGING ###

### RATE LIMITING ###
-A ufw-user-limit -m limit --limit 3/minute -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT
-A ufw-user-limit-accept -j ACCEPT
### END RATE LIMITING ###
COMMIT

Any ideas why this bizarre behaviour is occurring?
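One hedged suggestion: with ufw default deny outgoing, the OpenVPN control channel itself, which leaves via the physical interface rather than tun0, has no allow rule, so the tunnel can never (re-)establish while UFW is up. Explicitly allowing the handshake to the VPN endpoint may remove the need for the dance (the interface, server address, port, and protocol below are placeholders):

sudo ufw allow out on tun0 from any to any
sudo ufw allow out on eth0 to 203.0.113.10 port 1194 proto udp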

How do I change MySQL wait_timeout in production environment at runtime?

Posted: 28 Aug 2021 09:39 PM PDT

I'm running Windows, IIS, MySQL, PHP.

In my.ini under [mysqld] the value for wait_timeout is set to 60.

wait_timeout = 60  

But when I execute the following:

show variables like 'wait_timeout';  

It shows me that the value is 28800, which I know is the default.

So I tried to set the value by executing the following:

SET GLOBAL wait_timeout = 60;  

But this doesn't seem to work. MySQL Workbench tells me "0 row(s) affected", and when I execute show variables like 'wait_timeout' it still tells me that the value is 28800.

I've also checked interactive_timeout and the story is the same. The value is 28800 and I can't change it.

What am I missing here?
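One thing worth checking before concluding the SET failed: wait_timeout exists at both GLOBAL and SESSION scope, plain SHOW VARIABLES reports the session copy, and each session snapshots the global value at connect time, so an already-open Workbench session keeps reporting 28800 (and "0 row(s) affected" is the normal result of a successful SET). Comparing the two scopes makes this visible:

SET GLOBAL wait_timeout = 60;
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';   -- reflects the new value immediately
SHOW SESSION VARIABLES LIKE 'wait_timeout';  -- still 28800 in sessions opened earlier

If the GLOBAL value is also 28800 after a restart, the server is probably reading a different my.ini than the one edited; the Windows service's command line (its --defaults-file argument) shows which file is actually loaded.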

Ping host with dual IPs on 1 IP, echo returns from other IP

Posted: 28 Aug 2021 07:54 PM PDT

I'm running Fedora 33 on a host (i5 CPU, 8GB RAM, SSD and HDD) which is set up as a router; it has 5 NICs. I've managed to get dual internet gateways and dual LANs working reasonably well using nftables.

One gateway is DSL with pppoe, the other a cable modem. Both connect and can see the internet. Both LANs can see the internet and provide services which are seen by the internet. IOW, NAT and forwarding are working well.

Here is the problem: I can't figure out how to set up the routing tables. Whichever gateway has the lowest metric works with NAT and forwarding to its LAN, but that shuts off NAT and forwarding for the other gateway and LAN. So everything works on only one gateway at a time from the LAN machines' perspective.

root@gata[~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         67.193.x.x      0.0.0.0         UG    100    0        0 coglink
0.0.0.0         206.248.x.x     0.0.0.0         UG    104    0        0 ppp0
10.0.0.0        0.0.0.0         255.0.0.0       U     103    0        0 tekgw
67.193.56.0     0.0.0.0         255.255.248.0   U     100    0        0 coglink
192.168.1.0     0.0.0.0         255.255.255.0   U     102    0        0 coggw
206.248.155.132 0.0.0.0         255.255.255.255 UH    105    0        0 ppp0

I know it's possible to set up routes so that machines on 10.0.0.0 always use ppp0, and machines on 192.168.1.0 always use coglink, but web searches on how to do it have been fruitless. Same with the internet-facing interfaces. If someone can point me to a lucid, relevant tutorial on IP routing for multiple interfaces, I'd be very grateful.
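The standard tool for "this LAN always uses that gateway" is iproute2 policy routing: one routing table per uplink, and source-based rules selecting the table. A minimal sketch using the interfaces above (gateway addresses masked as in the question):

# table 100: cable uplink, table 200: DSL uplink
ip route add default via 67.193.x.x dev coglink table 100
ip route add default via 206.248.x.x dev ppp0 table 200

# 192.168.1.0/24 always exits via cable, 10.0.0.0/8 always via DSL
ip rule add from 192.168.1.0/24 table 100
ip rule add from 10.0.0.0/8 table 200

ip-rule(8) and the LARTC multi-uplink chapter cover the companion detail: replies to connections arriving on a WAN interface must leave through the same interface, which needs the same kind of rules keyed on the WAN addresses.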

How to create an SSL certificate for an AWS application load balancer without a domain

Posted: 28 Aug 2021 09:04 PM PDT

I am trying to create a CloudFormation stack that can be provisioned by anybody (basically I want to share it either in the marketplace or make it public on GitHub), and which includes a set of EC2 instances behind an ALB (no autoscaling, but rather a fixed number of instances).
I want to create a single listener for the ALB, listening on a non-default port (let's say 9999), that uses HTTPS. In order to do this, the ALB forces me to use an SSL certificate. I only care about the encryption, not about the CA validation (because this is meant for internal traffic). What I would like is encryption enabled between a client and the load balancer, like: https://my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com:9999. This is a REST API, so I don't care about the browser popping up a "Your connection is not private" warning.

I can't rely on having a domain: since I want to share this template, I don't expect everybody to own one. I can think of 3 solutions, but I don't like any of them (and I don't even know if they will work):

  1. Generate a self-signed cert in the userdata script, push it to ACM, then use that cert from the ALB (a sketch of this option follows after the list).
    Downside: this will probably require removing the cert manually if the stack is destroyed, as the certificate was not created by CloudFormation, but by the EC2 bootstrap.

  2. Generate a self-signed cert in the userdata script, but instead of pushing it to ACM, install it on an EC2-based load balancer (using something like HAProxy/nginx).
    Downside: we don't get the benefits of the AWS ALB.

  3. Have the end user create a subdomain (myrestapi.example-domain.com) beforehand, and generate a cert for that domain from the CloudFormation stack.
    Downside: requires an extra step from the user, plus touching their existing infrastructure.
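A sketch of option 1, the variant that keeps the ALB (and inherits the cleanup caveat the item mentions): generate a throwaway self-signed certificate in userdata, import it into ACM, and hand the resulting ARN to the HTTPS listener:

# self-signed cert; the CN is arbitrary since clients won't validate it
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
    -subj "/CN=internal-alb" -keyout key.pem -out cert.pem

# import into ACM and capture the ARN for the listener
CERT_ARN=$(aws acm import-certificate \
    --certificate fileb://cert.pem --private-key fileb://key.pem \
    --query CertificateArn --output text)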

Can you modify the OWA redirection page in a hybrid O365 environment?

Posted: 28 Aug 2021 06:05 PM PDT

In a hybrid Exchange environment, if you migrate a user mailbox and then they attempt to access the mailbox using the on-prem OWA (Outlook Web App) link, they will be presented with a page that instructs them to click another link to reach their mailbox and offers a button to create a favorite to the new OWA on O365. Can this redirection page be modified to change the wording, add branding or remove the button to add a favorite?

File writing issue with mounted ftp drive with curlftpfs

Posted: 28 Aug 2021 03:03 PM PDT

I have mounted an FTP account onto a Linux folder using the command below:

curlftpfs -o user=userid:password ip-address /home/temp -o kernel_cache,allow_other,direct_io,umask=0000,uid=1000,gid=1000  

The problem I am having is that whenever I try to save data to any file in this mounted folder (e.g. a text file), it gives "Input/output error, unable to flush data"; afterwards the file is created in the folder, but the data is not written to it.

Is there anything I am missing in the command? I am using the curlftpfs version below:

curlftpfs 0.9.2 libcurl/7.29.0 fuse/2.9

I also found the link below, which includes a patch, but there seems to be no documentation on how/where to apply it. Any idea how to apply this patch?

https://bugzilla.redhat.com/show_bug.cgi?id=671204
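On the "how to apply it" part: a Bugzilla attachment like that is a source patch, so the usual route is rebuilding curlftpfs from source with the patch applied. A generic sketch (the patch filename is a placeholder):

tar xzf curlftpfs-0.9.2.tar.gz && cd curlftpfs-0.9.2
patch -p1 < /tmp/curlftpfs-fix.patch   # try -p0 if the paths inside the patch have no leading directory
./configure && make && sudo make install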

How to require publickey and OTP, or password and OTP, when logging in with SSH?

Posted: 28 Aug 2021 09:04 PM PDT

I'm trying to get SSH to work in a way where password auth can be skipped with a key, and in addition every login is followed up with TOTP using Google's libpam, on my new Debian 9 installation.

So far I've been able to get the first part working: if I provide a key, the server asks me for the OTP. But to get there I've had to comment out both @include common-auth and @include common-password in /etc/pam.d/sshd to suppress the password prompt.

It seems obvious, then, that if I put AuthenticationMethods publickey,keyboard-interactive:pam password,keyboard-interactive:pam in my sshd_config and try logging in without a key, it does not matter what password I provide, since the password-checking parts are commented out.

The logical way to solve this, as it would seem to a novice like me, would be to define different PAM methods or classes and then somehow reference those in my sshd_config, but I can't seem to find any information about such an operation.

Is it even possible to accomplish this particular combo?

edit 1:

Tinkering further with this, it really does not make as much sense as I initially thought. If I comment out both @include common-auth and @include common-password, I can get publickey,keyboard-interactive:pam to not ask for a password. If I now set AuthenticationMethods password for a specific user, that user is not able to log in at all, because every password is rejected, even the valid one. So it seems the sshd password auth method also uses the /etc/pam.d/sshd config. If I don't comment out those includes, keyboard-interactive:pam asks for password and verification code, but the password auth method still fails for any user that has OTP initialized (and would fail for all of them unless I give Google's libpam the nullok option). It seems like password is just a crippled version of keyboard-interactive:pam that can only prompt for one input and thus always fails if more than one input is required.

If I write my own pam.d config, is there any way to make SSH use it instead of /etc/pam.d/sshd?

edit 2:

I'm starting to think that I can't do (password && otp) || (publickey && otp), because the public key is checked in a different place from the rest; so unless I can define which PAM config to use per AuthenticationMethods entry, or somehow send parameters/arguments to the PAM module, knowing when to check both and when to check only the OTP seems impossible.
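For reference, the half that does work, key plus OTP with no password prompt, usually boils down to a stack like the sketch below (assuming Google's pam_google_authenticator and Debian's include layout). The edits above describe the real obstacle accurately: sshd offers no per-method choice of PAM stack, and the password method can only submit a single secret:

# /etc/pam.d/sshd (sketch)
# @include common-auth      <- commented out: no system password prompt
auth required pam_google_authenticator.so
@include common-account
@include common-session

# /etc/ssh/sshd_config (sketch)
UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive:pam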

Postfix allow incoming mail for specified domain from specified ips

Posted: 28 Aug 2021 05:03 PM PDT

I am running a Postfix mail server. Some domains are configured so that the DNS MX record points to an antispam service, which forwards the good mail to our mail server. Some senders ignore the MX entry and send their mail (mostly spam) directly to the Postfix server.

So I tested some configuration changes on the Postfix server: when an email is addressed to one of the specified domains, check which IP is sending it; if it is an IP from the antispam service, accept the mail, and reject it from all other IPs.

As reference I used these two sites to configure the Postfix mail server: Postfix Limit mail for domain from IP range and http://www.postfix.org/RESTRICTION_CLASS_README.html

When I test my configuration, I see that the domain-matching part is working. But my problem is that all incoming mail for the specified domains is rejected; no email is accepted, even though the IP I send from is allowed.

So here is my Postfix configuration.

main.cf

smtpd_restriction_classes = antispam
antispam = check_sender_access texthash:/etc/postfix/allowed_ips, reject

smtpd_recipient_restrictions =
    [... other restrictions ...]
    check_recipient_access texthash:/etc/postfix/protected_domains,
    permit

allowed_ips

192.0.2.0/24 PERMIT
198.51.100.4/32 PERMIT
0.0.0.0/0 REJECT

protected_domains

domain.example antispam
domain2.example antispam
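A hedged observation that may explain the blanket rejections: check_sender_access matches the envelope MAIL FROM address, not the connecting client's IP, so an IP-keyed table never matches and the trailing reject fires for every message. Matching the client address instead, with a cidr: table so the /24 and /32 masks are actually honoured (texthash does exact string lookups only), would look like this sketch:

# main.cf
smtpd_restriction_classes = antispam
antispam = check_client_access cidr:/etc/postfix/allowed_ips, reject

# /etc/postfix/allowed_ips, now a cidr table (no postmap needed for cidr:)
192.0.2.0/24    permit
198.51.100.4/32 permit
0.0.0.0/0       reject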

Nginx configuration with HAproxy proxy protocol and internal redirection

Posted: 28 Aug 2021 05:03 PM PDT

I need to redirect an HTTPS stream from HAProxy to Nginx without SSL termination and without losing the information about the original client IP. Unfortunately, I cannot change the configuration of the default 443 site on Nginx, because it's maintained by the Synology NAS configuration.

I was thinking about a new listen port on Nginx accepting the proxy protocol from HAProxy, and some kind of internal redirection to the local 443 port without SSL decoding/encoding, but passing along the original client IP taken from HAProxy. Is that somehow possible?

Edit: the background is that I have OpenVPN and web services tunneled through the same external 443 port, so it actually looks like this:

router 443 TCP  ->  HAProxy -> SNI check -> stunnel -> OpenVPN
                                   |
                                   ------> SSL termination -> Nginx 443 HTTPS

I use HAProxy because ngx_stream_ssl_preread_module is not available in Synology's built-in Nginx.

Edit: I think the situation and question can be more generic:

Nginx:
  port X accessed via proxy protocol with SSL/TLS
  port Y

How do I pass the stream from port X to Y with the information about the source client IP and without SSL termination? Is a listen directive with proxy_protocol on port Y the only possible option?
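For the generic form of the question: in stock nginx this is done in the stream module, and yes, the receiving side must also be declared with proxy_protocol, because once the TLS payload stays opaque the client IP exists only inside that header. A sketch with placeholder ports:

stream {
    server {
        listen 8443 proxy_protocol;   # port X: HAProxy sends the PROXY protocol here
        proxy_pass 127.0.0.1:9443;    # port Y
        proxy_protocol on;            # re-emit the header so Y still sees the client IP
    }
}

Which confirms the suspicion in the question for the Synology case: since the built-in 443 server block cannot be told to accept proxy_protocol, there is no way to smuggle the client IP into it at the TCP level.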

How to Easily Pass an Environment Variable to an .exe in "Bash on Windows 10"?

Posted: 28 Aug 2021 02:01 PM PDT

I am looking for a way to easily pass an environment variable to a .exe when invoked from the Bash on Windows 10 terminal. It seems that

TEST=somevalue example.exe  

does not work.
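One documented mechanism worth trying (hedged, as behaviour varies across Windows builds): Win32 processes ignore the WSL environment unless the variable is listed in WSLENV, where the /w flag marks it as flowing from WSL to Windows:

export WSLENV=TEST/w     # share TEST from WSL into Windows processes it launches
TEST=somevalue example.exe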

Apache: Redirect everything from www to non-www using https only (including HSTS)

Posted: 28 Aug 2021 07:02 PM PDT

My goal

Everything should result in https://mydomain.tld (non-www and with TLS), and HSTS should work correctly. I am using certificates from LE (Let's Encrypt), so I used their wizard to make my website HTTPS everywhere. But it doesn't seem to work correctly.

My current problems

  1. Visiting http://mydomain.tld (non-www, non-TLS): the result is the Apache status page, but I already have a website running with content. Reloading the page results in https://mydomain.tld with the website content. But it should do that from the first visit, not only after reloading the page.
  2. Visiting http://www.mydomain.tld results in https://www.mydomain.tld, which is okay from a TLS view, but it doesn't redirect to non-www, which is my goal.
  3. Visiting https://www.mydomain.tld results in https://www.mydomain.tld. No redirection to non-www.
  4. No problem: visiting https://mydomain.tld results in the same URL, which is what I want.

DNS-settings:

A-RECORDS
.mydomain.tld    -> 111.222.333.444
*.mydomain.tld   -> 111.222.333.444
www.mydomain.tld -> 111.222.333.444

mydomain.tld.conf

<VirtualHost *:80>

ServerName mydomain.tld
ServerAlias www.mydomain.tld
ServerAdmin contact@mydomain.tld
DocumentRoot /var/www/mydomain.tld/public_html
Redirect permanent / https://mydomain.tld/

<Directory /var/www/mydomain.tld/public_html>
Options FollowSymLinks
AllowOverride all
Require all granted
</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}/%$1 [R,L]
</VirtualHost>

mydomain.tld-le-ssl.conf

<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName www.mydomain.tld
        ServerAlias mydomain.tld
        ServerAdmin contact@mydomain.tld
        DocumentRoot /var/www/mydomain.tld/public_html

        <Directory /var/www/mydomain.tld/public_html>
        Options FollowSymLinks
        AllowOverride all
        Require all granted
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"

RewriteEngine on
SSLCertificateFile /etc/letsencrypt/live/mydomain.tld/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.tld/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf

</VirtualHost>
</IfModule>

As you can see above, mydomain.tld-le-ssl.conf includes another file, which probably isn't causing the problems, but just for the record:

options-ssl-apache.conf

# Baseline setting to Include for SSL sites

SSLEngine on

# Intermediate configuration, tweak to your needs
SSLProtocol             all -SSLv2 -SSLv3
SSLCipherSuite          ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA$
SSLHonorCipherOrder     on
SSLCompression          off

SSLOptions +StrictRequire

# Add vhost name to log entries:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" vhost_combined
LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_common

#CustomLog /var/log/apache2/access.log vhost_combined
#LogLevel warn
#ErrorLog /var/log/apache2/error.log

# Always ensure Cookies have "Secure" set (JAH 2012/1)
#Header edit Set-Cookie (?i)^(.*)(;\s*secure)??((\s*;)?(.*)) "$1; Secure$3$4"

Bonus problem

I have a .htaccess file in my domain root which makes the links look better:

  • without: https://mydomain.tld/index.php?page=news
  • with: https://mydomain.tld/news

.htaccess

RewriteEngine On

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^\w+$ index.php?page=$0 [L]
RewriteCond %{THE_REQUEST} index\.php
RewriteCond %{QUERY_STRING} ^page=(\w+)$
RewriteRule ^index\.php$ /%1? [R=301,L]

I would like to live without the .htaccess file and add its rules to the .conf file(s) if possible, but everything I have tried so far hasn't worked.
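Not a confirmed fix, but a sketch of the usual shape for the stated goal: a dedicated :443 vhost whose only job is the www-to-non-www redirect, next to a main :443 vhost whose ServerName is the bare domain (in the posted -le-ssl.conf the two names are swapped, which would explain problems 2 and 3). The certificate must cover both names:

<VirtualHost *:443>
    ServerName www.mydomain.tld
    Redirect permanent / https://mydomain.tld/
    SSLCertificateFile /etc/letsencrypt/live/mydomain.tld/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.tld/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>

<VirtualHost *:443>
    ServerName mydomain.tld
    DocumentRoot /var/www/mydomain.tld/public_html
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
    SSLCertificateFile /etc/letsencrypt/live/mydomain.tld/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.tld/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>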

Cisco SF500-24P Image Upgrade to 14088 fails

Posted: 28 Aug 2021 08:06 PM PDT

I am trying to upgrade the system to 1.4.0.88 and it fails with this error:

Status: Copy failed
Error Message: Copy: SW code file is over sized

I am using HTTP for the upgrade. Any suggestions?

Possibility to know who created an instance on Azure

Posted: 28 Aug 2021 08:06 PM PDT

Is it possible to find out, via the portal or PowerShell, which of the admins created new instances on Azure, or at least to get alerts when something new is created?
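Creations are recorded in the subscription's Activity Log, where the Caller field identifies the admin; the portal shows the same data under Activity Log, and alert rules can fire on VM create/update events. A PowerShell sketch with the Az module (time window and operation filter are illustrative):

Get-AzLog -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $_.OperationName.Value -eq 'Microsoft.Compute/virtualMachines/write' } |
    Select-Object Caller, EventTimestamp, ResourceId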

Why isn't the arc_max setting honoured on ZFS on Linux?

Posted: 28 Aug 2021 02:48 PM PDT

I'm running ZoL 0.6.2 from their PPA on Ubuntu 12.04. It's on a host with 16GB of memory intended to run some VMs using KVM/libvirt. After some time, ZoL is using an insane amount of memory, reaching 98% of RAM usage with some VMs running. This results in new processes refusing to start with "unable to allocate memory". I can't even start all my VMs anymore, which before using ZFS used about 40-50% of RAM.

As far as I understand, without tweaking, ZoL should release memory as soon as the system is short on memory. Well, it doesn't. So I decided to set the arc_max setting to 1GB.

# echo 1073741824 >> /sys/module/zfs/parameters/zfs_arc_max  

Still, it does not release any memory.

As you can see from the below ARC statistics, it's using more memory than it's configured to (compare c=7572030912 with c_max=1073741824).

What am I doing wrong here?

# cat /proc/spl/kstat/zfs/arcstats
4 1 0x01 84 4032 43757119584 392054268420115
name                            type data
hits                            4    28057644
misses                          4    13975282
demand_data_hits                4    19632274
demand_data_misses              4    571809
demand_metadata_hits            4    6333604
demand_metadata_misses          4    289110
prefetch_data_hits              4    1903379
prefetch_data_misses            4    12884520
prefetch_metadata_hits          4    188387
prefetch_metadata_misses        4    229843
mru_hits                        4    15390332
mru_ghost_hits                  4    1088944
mfu_hits                        4    10586761
mfu_ghost_hits                  4    169152
deleted                         4    35432344
recycle_miss                    4    701686
mutex_miss                      4    35304
evict_skip                      4    60416647
evict_l2_cached                 4    0
evict_l2_eligible               4    3022396862976
evict_l2_ineligible             4    1602907651584
hash_elements                   4    212777
hash_elements_max               4    256438
hash_collisions                 4    17163377
hash_chains                     4    51485
hash_chain_max                  4    10
p                               4    1527347963
c                               4    7572030912
c_min                           4    1038188800
c_max                           4    1073741824
size                            4    7572198224
hdr_size                        4    66873056
data_size                       4    7496095744
other_size                      4    9229424
anon_size                       4    169150464
anon_evict_data                 4    0
anon_evict_metadata             4    0
mru_size                        4    1358216192
mru_evict_data                  4    1352400896
mru_evict_metadata              4    508928
mru_ghost_size                  4    6305992192
mru_ghost_evict_data            4    4919159808
mru_ghost_evict_metadata        4    1386832384
mfu_size                        4    5968729088
mfu_evict_data                  4    5627991552
mfu_evict_metadata              4    336846336
mfu_ghost_size                  4    1330455552
mfu_ghost_evict_data            4    1287782400
mfu_ghost_evict_metadata        4    42673152
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_hdr_miss              4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_compress_successes           4    0
l2_compress_zeros               4    0
l2_compress_failures            4    0
memory_throttle_count           4    0
duplicate_buffers               4    0
duplicate_buffers_size          4    0
duplicate_reads                 4    0
memory_direct_count             4    66583
memory_indirect_count           4    7657293
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    427048272
arc_meta_limit                  4    2076377600
arc_meta_max                    4    498721632

# free -m
             total       used       free     shared    buffers     cached
Mem:         15841      15385        456          0         75         74
-/+ buffers/cache:      15235        606
Swap:            0          0          0
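One ZoL-specific detail that matches these numbers: on 0.6.x, writing zfs_arc_max at runtime only caps future growth; it does not shrink an ARC whose target size c is already above the new limit. Setting the limit before the module loads is the reliable route (a sketch; on Ubuntu the initramfs may also need regenerating so the option is present at early boot):

# persist the 1GB cap as a module option, then reboot (or unload/reload zfs)
echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
update-initramfs -u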

Enabling DSA key authentification for SFTP while still keeping password login as optional (Ubuntu 12.04)

Posted: 28 Aug 2021 06:05 PM PDT

I have a server running Ubuntu 12.04 Server. I want to be able to use SFTP on the command line with a DSA key, so I don't have to type the password into the terminal. Is this possible to do on the same server, i.e. SFTP to localhost (to test some PHP code before running it live)? I still want to allow password login for other clients if they want it: the key shouldn't be forced, but it also shouldn't ask for the password when the key is presented.

I have the following options enabled in ssh_config:

RSAAuthentication yes
PasswordAuthentication yes
PubkeyAuthentication yes
IdentityFile ~/.ssh/id_dsa

The following files, with the permissions shown, are in /root/.ssh/:

-rw-r--r--  1 root root  668 Apr 10 11:06 authorized_keys
-rw-------  1 root root  668 Apr 10 11:03 id_dsa
-rw-r--r--  1 root root  608 Apr 10 11:03 id_dsa.pub

I copied the key into authorized_keys with:

cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys  

And when I cat authorized_keys, the key has been added.

So, when I try to connect with sftp -v root@testserver (just locally, again, for testing some code, but that's irrelevant), I still get the password prompt. Here's a section of the verbose output:

debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/id_rsa
debug1: Offering DSA public key: /root/.ssh/id_dsa
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Next authentication method: password
root@testserver's password:

Have I missed something obvious? Or will it not work connecting locally?

Thanks
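Two easy checks for anyone hitting the same wall. First, the options listed above belong in /etc/ssh/sshd_config (the server), not ssh_config (the client). Second, with StrictModes (the default) sshd silently ignores authorized_keys when the home directory or ~/.ssh is too permissive; the server's auth log names the exact reason:

# permissions sshd insists on for root's key material
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys

# watch the server log while running: sftp -v root@testserver
tail -f /var/log/auth.log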

Zabbix agent - high CPU usage

Posted: 28 Aug 2021 07:02 PM PDT

I am monitoring a host with Zabbix, and I noticed that the Zabbix agent has started using quite a lot of CPU cycles:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
26774 zabbix    20   0 68428 1312  752 R   99  0.0  63:27.67 /usr/sbin/zabbix_agentd
26773 zabbix    20   0 68428 1324  764 R   99  0.0  63:26.33 /usr/sbin/zabbix_agentd

There are about 100 items monitored by the agent. They are also monitored on other, identical hosts where the Zabbix agent does not consume as much CPU. The agents send collected data to a Zabbix proxy. The agent configuration is the default. The host CPU has 8 cores (2.4 GHz). The smallest interval for monitored items is 60 seconds.

I use Zabbix server/agent 1.8.11 and I can't upgrade to 2.2, at least for now.

I checked the debug logs on all sides (Zabbix server, proxy, agent) and can't find any issues there; just the usual checks received and sent all the time.

I don't know how to investigate this issue further and am asking for the community's help. How could I trace why the agent is consuming CPU so heavily?

One more thing that looks strange to me is the stats of the network connections:

netstat -an|awk '/tcp/ {print $6}'|sort|uniq -c
      2 CLOSE_WAIT
     21 CLOSING
   3521 ESTABLISHED
   2615 FIN_WAIT1
    671 FIN_WAIT2
   1542 LAST_ACK
     14 LISTEN
    256 SYN_RECV
 117841 TIME_WAIT

Thank you.

Update 1.

netstat -tnp|grep zabbix
    tcp        1      0 10.120.0.3:10050        10.128.0.15:53372        CLOSE_WAIT  23777/zabbix_agentd
    tcp        1      0 10.120.0.3:10050        10.128.0.15:53970        CLOSE_WAIT  23775/zabbix_agentd
    tcp        1      0 10.120.0.3:10050        10.128.0.15:53111        CLOSE_WAIT  23776/zabbix_agentd

10.128.0.15 - IP of the Zabbix server
10.120.0.3 - IP of the Zabbix host

Update 2.

Those TIME_WAIT connections are from the nginx web server.

Update 3.

I attached to the Zabbix agent process with strace, and it appears that 100% of the time is spent by the agents in the read syscall:

strace -C -f -p 23776

Process 23776 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    2.175528        2515       865           read
------ ----------- ----------- --------- --------- ----------------
100.00    2.175528                   865           total

Update 4.

Just to get everything clear... I tried working with the TIME_WAIT connection state. For example, I tried decreasing net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait and net.netfilter.nf_conntrack_tcp_timeout_time_wait to see if it helps. Unfortunately, it did not.

Conclusion

The Zabbix agent CPU load issue turned out to be bound to the number of network connections. If we attach to the zabbix_agentd process using strace, we can see how the CPU cycles are used (first column: CPU time spent running in the kernel):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00   15.252232        8646      1764           read
  0.00    0.000000           0         3           write
  0.00    0.000000           0         1           open
...
------ ----------- ----------- --------- --------- ----------------
100.00   15.252232                  1778           total

Here most of the CPU time is spent in read system calls. Further investigation showed that these read calls (2 of them are shown below) are continuous attempts to read the /proc/net/tcp file. The latter contains network statistics such as TCP and UDP connections, sockets, etc. On average the file contained 70000-150000 entries.

8048       0.000068 open("/proc/net/tcp", O_RDONLY) = 7 <0.000066>
8048       0.000117 fstat(7, {st_dev=makedev(0, 3), st_ino=4026531993, st_mode=S_IFREG|0444, st_nlink=1, st_uid=0, st_gid=0, st_blksize=1024, st_blocks=0, st_size=0, st_atime=2013/04/01-09:33:57, st_mtime=2013/04/01-09:33:57, st_ctime=2013/04/01-09:33:57}) = 0 <0.000012>
8048       0.000093 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f30a0d38000 <0.000033>
8048       0.000087 read(7, "  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode    "..., 1024) = 1024 <0.000091>
8048       0.000170 read(7, "                         \n   6: 0300810A:0050 9275CE75:E67D 03 00000000:00000000 01:00000047 0000000"..., 1024) = 1024 <0.000063>

Nagios Configuration Error

Posted: 28 Aug 2021 03:03 PM PDT

Running /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg does not give any errors. The output is: Things look okay - No serious problems were detected during the pre-flight check.

However, when I try to make some configuration changes through the XI interface, the configuration verification fails.

Configuration submitted for processing...
Waiting for configuration verification.......
Configuration verification failed.

Nagios is still able to monitor my services and hosts, but I can't make any changes to the configuration using the XI interface.

I took a look at the configuration snapshots and saw that there were over 40 errors. I am very puzzled as to why they don't show up when I run the sanity check (plus Nagios is able to monitor the hosts that produced the errors). Another thing to note: I am able to restart Nagios; doesn't this prove that there isn't any error, since it restarts normally?

Allowing Domain Users to run winrm commands

Posted: 28 Aug 2021 04:01 PM PDT

Currently I have AD/Kerberos configured on one EC2 instance (Windows 2008 R2) and have created a couple of users, each with administrator privileges. When we log in as the non-domain Administrator, I can successfully execute winrm commands. But when I log in as a domain user (who has administrator privileges), I cannot run the winrm commands:

C:\Users\domain-username>winrm get winrm/config/service/auth
WSManFault
    Message = Access is denied.

Error number:  -2147024891 0x80070005
Access is denied.

I checked the Group Policy editor for WinRM and did not find anything relevant. I am not sure what I am missing.

How to formulate IP forwarding rule using iptables

Posted: 28 Aug 2021 09:39 PM PDT

I have two Systems A and B. A is a TCP Client and sends a message to TCP Server on B.

------------------                --------------------------
    System A                        System B
  192.168.0.5 wlan0               192.168.0.3 wlan0
  127.0.0.1   lo                  127.0.0.1 lo
  TCP Client    <------------>    TCP Server on 127.0.0.1
------------------                ----------------------------

The TCP Client sends message to 192.168.0.3.

This should be redirected to the local interface of B, as the TCP server is running on 127.0.0.1, port 8000, of System B.

Therefore I wrote the following iptables rules; however, my server on B doesn't receive any messages. Oh, and by the way, these two systems are Ubuntu Linux systems.

Here is what I did on System B:

#Enable IP Forwarding for NAT
echo "1" > /proc/sys/net/ipv4/ip_forward

#Flush all iptable chains and start afresh
sudo iptables -F

#Forward incoming packets on 192.168.0.3 at wlan0 interface to 127.0.0.1
sudo iptables -t nat -A PREROUTING -p tcp -i wlan0 -d 192.168.0.3 --dport 8000 -j DNAT --to 127.0.0.1:8000

#Explicitly allow incoming connections on port 8000
sudo iptables -A INPUT -i wlan0 -p tcp --dport 8000 -m state --state NEW,ESTABLISHED -j ACCEPT

#Explicitly allow outgoing messages from port 8000
sudo iptables -A OUTPUT -o wlan0 -p tcp --sport 8000 -m state --state ESTABLISHED -j ACCEPT

Then I start the server on B and send a message from the TCP client on A. I can see the packets from 192.168.0.5 on wlan0 in Wireshark, but they never get forwarded :(

Please help.

UPDATE:

After input from the experts here, I have made a more realistic "NAT" scenario for applying the forwarding rules, but I still have issues. I have explained this in my newer post: Iptables: Forwarding packets doesn't work
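One kernel detail that bites exactly this setup, offered as a hedged pointer: packets arriving on a physical interface and DNATed to 127.0.0.1 are treated as martians and silently dropped unless route_localnet is enabled on that interface (available since kernel 3.6):

# allow DNAT of externally received packets to 127.0.0.1 on wlan0
sysctl -w net.ipv4.conf.wlan0.route_localnet=1

On older kernels the simpler options are to bind the server to 0.0.0.0, or to DNAT to the interface address instead of loopback.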

Restoring StaticFileModule in IIS

Posted: 28 Aug 2021 10:09 PM PDT

How do you restore the default handler mappings? I accidentally deleted the StaticFile mapping in the Default Web Site and now I can't bring it back.

Additionally, "Revert to Parent" doesn't bring it back.
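A sketch of one recovery path, assuming it was the site-level handlers section that was edited: re-add the StaticFile mapping explicitly with appcmd, using the attribute values that appear in a stock applicationHost.config (treat the exact syntax as a starting point rather than gospel):

%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:handlers ^
  /+"[name='StaticFile',path='*',verb='*',modules='StaticFileModule,DefaultDocumentModule,DirectoryListingModule',resourceType='Either',requireAccess='Read']"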

How to understand /etc/mtab?

Posted: 28 Aug 2021 09:57 PM PDT

/dev/mapper/VolGroup00-LogVol00 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/sda1 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0

What do the 6 columns mean?
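For reference, /etc/mtab uses the same six fields as fstab(5): device, mount point, filesystem type, mount options, and the dump and fsck-pass numbers (which are simply written as 0 in mtab). Annotated against the /boot line above:

# device     mountpoint  type  options  dump  pass
/dev/sda1    /boot       ext3  rw       0     0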

God Process Monitoring - CentOS - Event System Not Found

Posted: 28 Aug 2021 04:01 PM PDT

I have god installed on at least a dozen (or more) servers running CentOS 5.5, in both i386 and x86_64 flavors, and they work perfectly. I just set up two new CentOS 5.5 x86_64 servers and installed god, but I'm getting an event system error:

$ tail /var/log/god.log
E [2011-04-22 12:33:17] ERROR: Condition 'God::Conditions::ProcessExits'
  requires an event system but none has been loaded

$ god check
using event system: none
[fail] event system did not load

$ uname -a
Linux server2.example.com 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

I can't find any cn or netlink kernel module on any of my CentOS servers. Yet I have other servers that work fine:

$ god check
using event system: netlink
starting event handler
forking off new process
forked process with pid = 17559
killing process
[ok] process exit event received

$ uname -a
Linux server1.example.com 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

All servers run Ruby v1.8.6-399:

# ruby -v
ruby 1.8.6 (2010-02-05 patchlevel 399) [x86_64-linux]

Ruby comes from the ELFF repo:

# rpm -qi ruby
Name        : ruby                         Relocations: (not relocatable)
Version     : 1.8.6.399                         Vendor: Bravenet ELFF <elff@bravenet.com>
Release     : 2.el5                         Build Date: Fri Apr 16 18:53:48 2010
Install Date: Thu Mar 24 11:23:48 2011         Build Host: el-build.local
Group       : Development/Languages         Source RPM: ruby-1.8.6.399-2.el5.src.rpm
Size        : 1738695                          License: Ruby or GPLv2
Signature   : DSA/SHA1, Fri Apr 16 19:07:49 2010, Key ID 551751dfe8b071d6
Packager    : Bravenet ELFF <elff@bravenet.com>

I did a little digging and can see the exception thrown when god tries to load the netlink event handler:

no such file to load -- netlink_handler_ext   

What could possibly be different between my servers? Am I missing something simple?
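A hedged guess, since the missing file is netlink_handler_ext: god's netlink event system is a native extension compiled when the gem is installed, and if gcc or the kernel headers were absent at install time it silently falls back to "none". Rebuilding the gem after installing the build dependencies may restore it:

yum install -y gcc make kernel-headers ruby-devel
gem pristine god    # rebuilds god's native extensions in place
god check           # should now report: using event system: netlink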

VSFTPD Virtual (Guest) Users with @ in username

Posted: 28 Aug 2021 10:09 PM PDT

I've set up VSFTPD so that when a user connects, it uses user_config_dir to look up the connected user and set up a chrooted guest session (since there are multiple FTP accounts belonging to multiple users on the server). This works fine with usernames that have no special characters. To avoid collisions on usernames, I'm giving each username an '@domain.tld' suffix; however, the custom rules in user_config_dir don't load when the username contains an @ symbol. Is there a way around this in VSFTPD, or a setting that needs to be switched?

vsftpd.conf

listen=YES
anonymous_enable=NO
local_enable=YES
guest_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
chroot_local_users=YES
pam_service_name=scftp
user_config_dir=/etc/vsftpd/virtual

pam.d/scftp

auth required /lib/security/pam_userdb.so db=/etc/vsftpd/vsftpd_login
account required /lib/security/pam_userdb.so db=/etc/vsftpd/vsftpd_login

virtual/usernamewithoutspecialchars

write_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES
anon_upload_enable=YES
local_root=/home/marco
chroot_local_user=YES
dirlist_enable=YES
download_enable=YES
guest_username=marco

virtual/user@domain.tld

write_enable=YES
anon_mkdir_write_enable=YES
anon_other_write_enable=YES
anon_upload_enable=YES
local_root=/home/marco
chroot_local_user=YES
dirlist_enable=YES
download_enable=YES
guest_username=marco

It really seems it just won't match the FTP user user@domain.tld to the proper virtual file, while usernamewithoutspecialchars works just fine.
