Saturday, February 5, 2022

Recent Questions - Server Fault


Prevent ufw from logging lines based on a string/keyword?

Posted: 05 Feb 2022 04:31 AM PST

Is it possible to prevent certain repeating lines from being logged to the ufw log?

Basically, I want to stop logging all lines containing UFW BLOCK & SRC=192.168.0.202.

Full line example:

Feb  5 13:03:38 ddomain kernel: [173885.985537] [UFW BLOCK] IN= OUT=enp0s3 SRC=192.168.0.202 DST=192.168.0.1 LEN=328 TOS=0x00 PREC=0x00 TTL=64 ID=5497 DF PROTO=UDP SPT=68 DPT=67 LEN=308  

I've figured out that it has something to do with rsyslog and the configuration file 20-ufw.conf

# Log kernel generated UFW log messages to file
:msg,contains,"[UFW " /var/log/ufw.log

# Uncomment the following to stop logging anything that matches the last rule.
# Doing this will stop logging kernel generated UFW log messages to the file
# normally containing kern.* messages (eg, /var/log/kern.log)
& stop

Maybe I could add something like :msg, not contains, "SRC=192.168.0.202"?
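For example, here is a minimal sketch of what I imagine the rule could look like (purely my assumption, using rsyslog's RainerScript syntax, placed before the existing rule in 20-ufw.conf):

# Hypothetical addition near the top of /etc/rsyslog.d/20-ufw.conf:
# discard the message only when both strings match, then let the rest through
if $msg contains "[UFW BLOCK]" and $msg contains "SRC=192.168.0.202" then stop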

segfault in system log

Posted: 05 Feb 2022 03:41 AM PST

I'm seeing this kind of message in the system log:

kernel: CLIDaemon[16777]: segfault at ece85260 ip 00000000ece85260 sp 00000000ffc8c64c error 14  

I can't find out what CLIDaemon is. Can anyone help me figure out what is segfaulting and track down the source of the segfault?
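So far the only starting points I could come up with myself (assumptions on my part: the binary is still on disk, and systemd-coredump may or may not be installed) are:

# Hypothetical first steps to identify the crashing binary
find / -xdev -name 'CLIDaemon*' 2>/dev/null            # locate the binary on disk
grep -r CLIDaemon /etc/cron* /etc/systemd 2>/dev/null  # find what starts it
coredumpctl list 2>/dev/null | grep -i clidaemon       # only if systemd-coredump is in use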

Linux VMware: importing network settings

Posted: 05 Feb 2022 02:19 AM PST

I have to use VMware, but I do not want to use Windows, so I installed VMware Player on my Fedora machine. It works. Good. Now I need to import a lot of VMs and networks from a backup. The VMs are running. But how can I import the network settings?

I got a binary registry file called "Networksettings":

[user]$ file Networksettings
Networksettings: MS Windows registry file, NT/2000 or above

In the Windows version of VMware Player there is an "import" button. In the Linux version, when I start sudo vmware-netcfg, I could not find such a button. So I searched around in /usr/bin/vmware-* and found the command vmware-networks --migrate-network-settings <networksettingsfile>. Then I logged into a root shell and got this:

[root]$ vmware-networks --migrate-network-settings Networksettings
Stopped all configured services on all networks
Failed to import configuration information from backup file Networksettings

Well. After that I even converted the "Networksettings" file to an XML file and tried to import that, but with no success either.

What should I do?
I'm thinking of setting up a dual boot... but I would rather just stick with Linux.

How to resolve list of zones locally

Posted: 05 Feb 2022 01:52 AM PST

My goal is to use some of the rate-limited DNSBLs on a system configured to use public DNS. The environment includes a configured BIND and systemd.

Is there a way to do that using /etc/resolv.conf or /run/systemd/resolve/stub-resolv.conf, and if so, how? It is important that the whole zone, including all of its subdomains, is resolved locally.

For example: dig xxxx.xxxx.xxxx.xxxx.zen.spamhaus.org does not work, while dig @127.0.0.53 does for the same record.
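From what I have read so far (an assumption I have not yet verified), systemd-resolved routing domains might do this, with the local BIND listening on 127.0.0.1:

# Hypothetical drop-in, e.g. /etc/systemd/resolved.conf.d/dnsbl.conf
[Resolve]
DNS=127.0.0.1
# '~' marks a routing domain: the zone and all of its subdomains
# are sent to the server(s) in DNS= above instead of the public resolver
Domains=~zen.spamhaus.org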

Reasons for having several data marts instead of only one central data warehouse. What are the tradeoffs here?

Posted: 05 Feb 2022 01:00 AM PST

What are the reasons for having several data marts instead of only one central data warehouse, and what are the tradeoffs?

According to my understanding, one of the reasons should be the following:

  • If one of the many data marts loses its connection or breaks down, the other data marts are still available. With a single data warehouse, the whole system would be directly unavailable and the employees would no longer be able to access it.

Apache2 SSL only works when virtualhost is removed?

Posted: 05 Feb 2022 12:59 AM PST

I'm making a website hosted at sparrowthenerd.space, and I'm trying to have it use multiple subdomains so I can run NextCloud, OctoPrint, and a general webpage all from the same IP address. As I understand it, this can be accomplished with VirtualHosts in Apache2. However, unless I remove the VirtualHost tags from my conf file (below), I get an SSL handshake error with Cloudflare enabled, and an SSL protocol error without it.

I am using Apache2 v2.4.52 on Debian 11 Bullseye. The web server is self-hosted and proxies to NodeJS on port 9999 (I think that's the right terminology?).

#<VirtualHost xxx:xx:xx:xxx:443>
        ServerAdmin webmaster@localhost
        ServerName sparrowthenerd.space
        DocumentRoot /var/www/sparrowthenerd

        ProxyPass /.well-known/ !
        ProxyPass / http://localhost:9999/
        ProxyPassReverse / http://localhost:9999/
        ProxyPreserveHost On

        SSLEngine on
        SSLProtocol all -SSLv2
        SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
        SSLCertificateFile /etc/apache2/ssl/sparrowthenerd.space.pem
        SSLCertificateKeyFile /etc/apache2/ssl/sparrowthenerd.space.key

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        <Directory /var/www>
                AllowOverride none

                Order Allow,Deny
                Allow from all
        </Directory>
#</VirtualHost>

When the VirtualHost tags are uncommented, I get the error. When they are commented out, I do not, but then I also can't add extra subdomains. I am using the Cloudflare proxy servers with a Cloudflare SSL certificate. Please let me know if you need more information; I'm happy to provide it!
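For what it's worth, this is roughly the shape I assumed a name-based SSL vhost should take (my assumption, adapted from the Apache docs; cloud.sparrowthenerd.space is just a made-up example subdomain):

# Hypothetical per-subdomain vhost; note *:443 instead of a literal IP
<VirtualHost *:443>
        ServerName cloud.sparrowthenerd.space
        SSLEngine on
        SSLCertificateFile /etc/apache2/ssl/sparrowthenerd.space.pem
        SSLCertificateKeyFile /etc/apache2/ssl/sparrowthenerd.space.key
        ProxyPass / http://localhost:9999/
        ProxyPassReverse / http://localhost:9999/
</VirtualHost>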

Domain registrar allows @ CNAME for root domain, is this reliable?

Posted: 05 Feb 2022 12:43 AM PST

Due to a gap in my knowledge, I set up a Windows VM on Azure, then went to Namecheap and registered a domain. Somewhere I got the idea to put in a CNAME record for @ pointing at the FQDN, and it works.

In my name records there are literally two entries, @ and www, as CNAMEs against the Azure VM FQDN. Everything works sweetly.

Yesterday, the guy I'm doing work for went to switch over the name servers on the planned domain which was not at Namecheap (I was using a dummy domain I registered at Namecheap) and couldn't do what I did, and we spent a few hours looking into it.

Today, I used a tool to look up the site, and it appears that Namecheap uses the domain I provided to look up the IP address and enters an A record for that IP, but it doesn't appear on the management page, so it's done in the background. Yesterday I reset the VM, the IP changed, and the domain was back up in a couple of minutes.

How normal is this? How stable is this? I don't have a dedicated IP on Azure.

I now understand that @ as a CNAME on the root domain is not normal (i.e. you typically can't place a CNAME record at @). But are there reputable domain hosts providing this service for free? Is it something I can search for? (A secondary problem is that Namecheap is currently not willing to take on the transfer of this domain, because apparently .com.au domains are tricky to transfer.)

If this is rare, are my only options paying Azure for a static IP and using A records?

How does a competent System Manager track ALL the Python packages their users install?

Posted: 05 Feb 2022 01:49 AM PST

It recently became painfully obvious that I just don't know how I'm supposed to manage Python packages on my systems.

I simply MUST be able to know what is installed and under which user IDs, since the Python community is coached that nothing should be installed as root. This leaves me with a serious problem!

In this instance I'm using Fedora distributions, MOSTLY but not exclusively Fedora Server (all some form of Fedora Core), BUT the question applies to other distributions too, I'm sure.

Ultimately, the question is: how is a system manager supposed to deal with this PROPERLY?

Do note that I'm NOT asking how to install Python itself. SOME packages of software that use Python have their own distribution packages, and it was one of these that prompted this sudden discovery of the risk I face: Mailman3 is just such a package, but installing it didn't also install all the needed sub-packages, so some "pip install"s were needed for those. And on discussing it with a colleague, he admitted he had screwed up a system I am responsible for managing, via a Python package he pip installed...

...I feel like I may be forced to implement security so individual users CANNOT install Python packages, which increases my own workload: not only to do that, but then to install the packages they want FOR them, with all the hassles that entails. I'm hoping I'm just clueless about some "feature" of Python! Otherwise, I think Python needs a serious rethink about how it does what it does, as it surely appears to be very unfriendly to system managers.
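For now, the best audit I could come up with on my own is a rough sketch like this (assuming users install with pip --user, so packages land under ~/.local):

# Hypothetical per-user package audit; paths are the pip --user defaults
for d in /home/*; do
    u=$(basename "$d")
    sudo -u "$u" python3 -m pip list --user 2>/dev/null | sed "s/^/$u: /"
done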

Gravitee using full DN in memberUid when searching for groups in LDAP

Posted: 05 Feb 2022 03:51 AM PST

I have set up Gravitee APIM 3.x (gateway, rest-api, console and portal). This works fine. When trying to replace the in-memory authentication with LDAP (FreeIPA) authentication, I am able to get the service to log users in, but they are not given a role.

The reason for this is that it uses the user's full DN when searching the memberUid field, which only contains a username.

Feb 04 16:27:46 somehost.somedomain.com gravitee[22030]: 16:27:46.646 [gravitee-listener-44] DEBUG o.s.s.l.u.DefaultLdapAuthoritiesPopulator - Searching for roles for user 'my_user', DN = 'uid=my_user,cn=users,cn=accounts,dc=somedomain,dc=com', with filter (&(objectClass=posixGroup)(memberUid={0})) in search base 'cn=groups,cn=compat'
Feb 04 16:27:46 somehost.somedomain.com gravitee[22030]: 16:27:46.647 [gravitee-listener-44] DEBUG o.s.s.l.SpringSecurityLdapTemplate - Using filter: (&(objectClass=posixGroup)(memberUid=uid=my_user,cn=users,cn=accounts,dc=somedomain,dc=com))
Feb 04 16:27:46 somehost.somedomain.com gravitee[22030]: 16:27:46.713 [gravitee-listener-44] DEBUG o.s.s.l.u.DefaultLdapAuthoritiesPopulator - Roles from search: []

I have created 4 groups in IPA, which correspond to the users, admins, publishers and consumers roles. Each of these groups has members, and I can do a manual search using ldapsearch to confirm this would work if Gravitee used the uid instead of the whole DN.

security:
  # When using an authentication providers, use trustAll mode for TLS connections
  trustAll: true
  providers:  # authentication providers
    - type: ldap
      context:
        username: ""
        password: ""
        url: "ldaps://<LDAP_SERVER_HOSTNAME>:<LDAP_PORT>/dc=somedomain,dc=com"
        base: "dc=somedomain,dc=com"
      authentication:
        user:
          # Search base for user authentication. Defaults to "". Only used with user filter.
          # It should be relative to the Base DN. If the whole DN is o=user accounts,c=io,o=gravitee then the base should be like this:
          base: "cn=users,cn=accounts"
          # The LDAP filter used to search for user during authentication. For example "(uid={0})". The substituted parameter is the user's login name.
          filter: "(&(objectClass=posixAccount)(uid={0}))"
          # Specifies the attribute name which contains the user photo (URL or binary)
          #photo-attribute: "jpegPhoto"
        group:
          # Search base for groups searches. Defaults to "". Only used with group filter.
          # It should be relative to the Base DN. If the whole DN is o=authorization groups,c=io,o=gravitee then the base should be like this:
          base: "cn=groups,cn=compat"
          filter: "(&(objectClass=posixGroup)(memberUid={0}))"
          role:
            attribute: "cn"
            mapper: {
              gt_consumer: API_CONSUMER,
              gt_publisher: API_PUBLISHER,
              gt_admins: ADMIN,
              gt_users: USER
            }
      lookup:
        # allow search results to display the user email. Be careful, It may be contrary to the user privacy.
        allow-email-in-search-results: true
        user:
          # Search base for user searches. Defaults to "". Only used with user filter.
          # It should be relative to the Base DN. If the whole DN is o=user accounts,c=io,o=gravitee then the base should be like this:
          base: "cn=users,cn=accounts"
          # The LDAP filter used to search for user during authentication. For example "(uid={0})". The substituted parameter is the user's login name.
          filter: "(&(objectClass=posixAccount)(uid={0}))"

How can I resolve this?
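While writing this up, I came across a hint (which I have not verified for Gravitee) that in Spring Security's DefaultLdapAuthoritiesPopulator, {0} is substituted with the user's full DN and {1} with the bare username, so perhaps the group filter should read:

          group:
            base: "cn=groups,cn=compat"
            # assumption: {1} = plain username, {0} = full DN
            filter: "(&(objectClass=posixGroup)(memberUid={1}))"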

Nginx Proxy Manager: cannot create SSL certificates

Posted: 04 Feb 2022 11:59 PM PST

I suddenly can't create new SSL certificates (Let's Encrypt) in "Nginx Proxy Manager". About 6 months ago I could still create some, but now all certificates have expired and I have to renew them.

Error message: "Internal Error".


Network setup:

NPM sits behind a router and has the internal IP 10.0.0.10. Ports 80/TCP and 443/TCP are forwarded to NPM. All websites are accessible from the WAN!

I can access my websites behind NPM via ports 80/TCP and 443/TCP; port 443/TCP works, but only without a valid certificate.

NPM is only used for IPv4 connections. The websites can be reached via IPv6 without NPM (direct connection).

IPv4 = via NPM, IPv6 = direct connection.

Why can't I create new certificates via Let's Encrypt?

[Screenshot: certificate error]

curl: (60) server certificate verification failed CRLfile: none

Posted: 05 Feb 2022 04:04 AM PST

I'm slowly transitioning from an exclusively developer role into more of a hybrid DevOps role at my company, which means I'm new to a lot of this. Please go easy on me... :-p

My client's server is running Ubuntu 16.04 with PHP 5.6.4, and there is a function in their site's administrative portal that runs a curl command (essentially) back to itself for some sort of file syncing. It has been failing for some time (a few weeks/months). The problem, I think, is that certificate verification is failing, and thus the function is dying on the vine.

When I ssh into the server, I can easily curl out to anywhere with no issues (Google, example.org, etc.). But even a basic curl to the site's main URL fails.

$ curl -v https://www.[my-site-name].com

*   Trying [my-site-IP]...
* Connected to [my-site-name] ([my-site-IP]) port 443 (#0)
* found 258 certificates in /etc/ssl/certs/ca-certificates.crt
* found 908 certificates in /etc/ssl/certs/
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
* Closing connection 0
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

I know that I can run curl with -k to skip verification, but I'm hesitant to do so. I guess the first question is: should I not worry about running this with the insecure flag, since it's technically not leaving the server at all? I've tested this exact same curl command on one of our newer boxes running Ubuntu 18.x and also on a DigitalOcean droplet running 20.x with no issues at all; both external and internal curls worked great.

I can even be on another server, curl back to the one experiencing issues, and that works fine too.

I've tried everything I could think of (which admittedly isn't much) and nothing seems to work; my next diagnostic idea is sketched after this list:

  • ran updates for the curl and certbot packages
  • forced update-ca-certificates to run
  • added /etc/ssl/certs/cacert.pem to both the curl.cainfo and openssl.cafile vars in php.ini
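The next diagnostic I plan to run (just my guess at where to look, assuming openssl is present on the box) is to inspect the chain the server actually presents:

# Hypothetical check: does the server send a complete, unexpired chain?
openssl s_client -connect www.[my-site-name].com:443 \
    -servername www.[my-site-name].com -showcerts </dev/null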

I know this probably doesn't matter, but just for completeness, I've also run the site through various online verification services. All came back with positive results. The only negative (I guess) is that SSL Labs graded us a 'B' because apparently TLS 1.0 is still enabled.


Any help would be greatly appreciated. I feel like the docs mentioned in the failure warning aren't really all that helpful.

Suggestions / tips / tricks ??

1,000 thank you's in advance!

ping dns works but nslookup fails?

Posted: 05 Feb 2022 12:32 AM PST

I have an issue with DNS in my current lab environment. This happened after a snapshot recovery on the DC image.

The following screenshot shows that I can ping google.com but nslookup google.com fails. I have never seen such behaviour before; how is it even possible?

Also, I am still able to ping google.com even after an ipconfig /flushdns.

The DC still has internet connectivity, but all the joined servers lost internet connectivity. The joined servers can still reach and resolve the DC name.

Any idea how I can start to fix this issue?
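What I plan to check next (my own assumptions about where the mismatch could hide: ping can be answered from the hosts file or the local cache, while nslookup always queries the configured DNS server directly):

REM Hypothetical checks on the DC
type C:\Windows\System32\drivers\etc\hosts
ipconfig /displaydns | findstr /i google
nslookup google.com 8.8.8.8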

How do you upgrade the Ubuntu version without upgrading the packages?

Posted: 05 Feb 2022 03:54 AM PST

I need to upgrade the Ubuntu version on my servers, but I do not want to upgrade the installed packages. Is this possible?

How to create Cloud Run mapping to wildcard subdomain

Posted: 04 Feb 2022 11:21 PM PST

I host my app with Cloud Run and use a domain from GoDaddy. It works perfectly. Now I want to make it possible to have a URL for every user's account, like user1.myapp.com, user2.myapp.com. How can I do this with Cloud Run and GoDaddy?

Set MacVLAN in Portainer to get IP from DHCP server

Posted: 05 Feb 2022 04:07 AM PST

I have a Portainer VM with 3 interfaces attached (eth0-eth2). I would like the containers to request an IP from the DHCP server that is located at x.x.x.1 on each interface.

Let's take eth0 as an example:

The subnet of this interface is 172.16.0.0/27 and I would like my containers to get an IP in the range 172.16.0.17-30. Under Portainer networks, I added a network with the following settings:

  • Name: 172Config
  • Driver: macvlan
  • Parent card: eth0
  • Subnet: 172.16.0.0/27
  • Gateway: 172.16.0.1
  • IP Range: 172.16.0.16/28 (I would like half of the subnet's hosts to be available to containers)

Then I added another network:

  • Name: 172Network
  • Driver: macvlan
  • Macvlan configuration: Selected creation and then 172Config
  • Enable manual container attachment : True

Then I attached 172Network to a new httpd container, but it doesn't work. I get the correct IP on the containers, but I can't ping or establish any kind of connection to them. Can any Portainer guru advise on the right way to do this?
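For reference, I assume the equivalent docker CLI for the network I created would be something like this (my reconstruction, with the CIDRs written out in full):

# Hypothetical docker equivalent of the Portainer macvlan config above
docker network create -d macvlan \
    --subnet=172.16.0.0/27 \
    --gateway=172.16.0.1 \
    --ip-range=172.16.0.16/28 \
    -o parent=eth0 172Config

(I have also read, but not confirmed, that docker's macvlan driver assigns IPs from its own IPAM rather than asking the DHCP server, and that the host cannot reach its own macvlan containers directly.)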

Rsync always failing at the same point

Posted: 05 Feb 2022 03:20 AM PST

I made a script for my Proxmox host to copy its daily backups to another machine (running Windows Server 2019).

The script mounts a CIFS share and runs this rsync command:

rsync -aqzP --delete --delete-excluded /mnt/raid/ /mnt/backups --exclude "*.log"  

It runs for about 6 minutes, and then when it tries to copy the vzdump-qemu-100-2021_05_16-00_00_03.vma.zst file, it throws an error:

rsync: [receiver] close failed on "/mnt/backups/backups/dump/.vzdump-qemu-100-2021_05_16-00_00_03.vma.zst.nrDEvQ": Broken pipe (32)
rsync error: error in file IO (code 11) at receiver.c(871) [receiver=3.2.3]
rsync: [sender] write error: Broken pipe (32)

What could be the problem?

The Windows machine has all the space needed for the copy.

podman: rootless container: permissions for container user

Posted: 04 Feb 2022 11:29 PM PST

In the nginx podman container, the nginx user is used to run the nginx server.

On the host machine ls -alh:

drwxrwx--- 2 myuser myuser 4.0K Aug 10 22:23 .
drwxrwx--- 3 myuser myuser 4.0K Aug 10 22:59 ..
-rw-rw---- 1 myuser myuser   46 Aug 10 22:24 .htpasswd

The same folder inside container ls -alh:

drwxrwx--- 2 root root 4.0K Aug 10 22:23 .
drwxr-xr-x 1 root root 4.0K Aug 10 11:05 ..
-rw-rw---- 1 root root   46 Aug 10 22:24 .htpasswd

The nginx user inside the container can't access .htpasswd because of o-rwx.

Question: what is the commonly used pattern to handle this kind of case in a rootless container, universally? Maybe it is possible to create a group (used later as the file's group owner) that gathers all the ranges from subuid/subgid for a particular host user, but how would I achieve this?
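The closest thing to a pattern I have found so far is a sketch like this (assuming the container's nginx user maps into my subgid range; GID 101 is only my guess at nginx's GID in the image):

# Hypothetical: chgrp into the mapped range from inside the user namespace
podman unshare chgrp 101 .htpasswd   # 101 = assumed nginx GID in the image
chmod g+r .htpasswd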

QLogic Fiber Card not working with HYVE-Zeus Server

Posted: 05 Feb 2022 12:01 AM PST

I have a QLogic QLE2564 fiber card that doesn't seem to be working with my server. I am running Windows Server 2019 on a Hyve Zeus V1 with a Supermicro X9DRD-LF motherboard. When installed, the card is not detected in Device Manager or when the system boots. All the lights on the card remain on even after boot. I have the card installed in the only PCIe slot on the board, with a riser. I tested it without the riser and still got the same results. I disabled the onboard gigabit ports through the BIOS and rebooted; that did nothing. I don't have another machine to test the card itself. Is there something in the BIOS I'm missing, or does it need a flash? Is the card not compatible, or could it be dead?

duplicity backup fails: "Private key file is encrypted"

Posted: 04 Feb 2022 11:59 PM PST

I am trying to upload my first duplicity backup to a remote server and it fails. This is the first time I am using the tool; it has never worked.

duplicity /home/me/Documents/ scp://me@<ip-address>//home/me/bak  

This is the answer I get:

BackendException: ssh connection to me@<ip-address>:22 failed: Private key file is encrypted
DEBUG:duplicity:BackendException: ssh connection to me@<ip-address>:22 failed: Private key file is encrypted

Logging in via ssh works fine.

I have searched the web but could not find any possible solution. My only hunch is that it could be related to my .ssh directory, which contains multiple identities and thus multiple .pub and key files.

EDIT: with -v 9 I get the following output (there is a lot more, but I'm just showing the last part):

ssh: Kex agreed: curve25519-sha256@libssh.org
DEBUG:sshbackend:Kex agreed: curve25519-sha256@libssh.org
ssh: HostKey agreed: ssh-ed25519
DEBUG:sshbackend:HostKey agreed: ssh-ed25519
ssh: Cipher agreed: aes128-ctr
DEBUG:sshbackend:Cipher agreed: aes128-ctr
ssh: MAC agreed: hmac-sha2-256
DEBUG:sshbackend:MAC agreed: hmac-sha2-256
ssh: Compression agreed: none
DEBUG:sshbackend:Compression agreed: none
ssh: kex engine KexCurve25519 specified hash_algo <built-in function openssl_sha256>
DEBUG:sshbackend:kex engine KexCurve25519 specified hash_algo <built-in function openssl_sha256>
ssh: Switch to new keys ...
DEBUG:sshbackend:Switch to new keys ...
Using temporary directory /tmp/duplicity-O8U6r5-tempdir
Level 5:duplicity:Using temporary directory /tmp/duplicity-O8U6r5-tempdir
Backend error detail: Traceback (innermost last):
  File "/usr/bin/duplicity", line 1581, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1567, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1406, in main
    action = commandline.ProcessCommandLine(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/duplicity/commandline.py", line 1140, in ProcessCommandLine
    backup, local_pathname = set_backend(args[0], args[1])
  File "/usr/lib/python2.7/site-packages/duplicity/commandline.py", line 1015, in set_backend
    globals.backend = backend.get_backend(bend)
  File "/usr/lib/python2.7/site-packages/duplicity/backend.py", line 223, in get_backend
    obj = get_backend_object(url_string)
  File "/usr/lib/python2.7/site-packages/duplicity/backend.py", line 209, in get_backend_object
    return factory(pu)
  File "/usr/lib/python2.7/site-packages/duplicity/backends/ssh_paramiko_backend.py", line 240, in __init__
    self.config['port'], e))

BackendException: ssh connection to me@<ip-address>:22 failed: Private key file is encrypted

That doesn't tell me much more, though.
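One thing I intend to try (my own idea; I am not sure the paramiko backend honours this option) is naming one key explicitly instead of letting the backend try every file in ~/.ssh:

# Hypothetical workaround; id_backup is a made-up key name
duplicity --ssh-options="-oIdentityFile=/home/me/.ssh/id_backup" \
    /home/me/Documents/ scp://me@<ip-address>//home/me/bak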

MySQL 200% CPU Usage

Posted: 05 Feb 2022 04:00 AM PST

On my server, MySQL is taking more than 200% CPU. I'm running a Java WAR file in a Tomcat server that uses Hibernate JPA for MySQL operations. I enabled the slow query log too, and nothing takes more than 2 seconds, but the number of queries performed is high because of Spring Boot JPA.

OS: Ubuntu 18.04.1 LTS

H/W specification:
  memory: 16GB
  Architecture: x86_64
  CPU op-mode(s): 32-bit, 64-bit
  Byte Order: Little Endian
  CPU(s): 4
  On-line CPU(s) list: 0-3
  Thread(s) per core: 1
  Core(s) per socket: 4
  Socket(s): 1
  NUMA node(s): 1
  Vendor ID: GenuineIntel
  CPU family: 6
  Model: 79
  Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
  CPU MHz: 2300.033

My mysqld.cnf file:

key_buffer_size     = 128M    

max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
max_connections = 500
query_cache_limit = 8M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8
innodb_lru_scan_depth=100

When I run the top command in Ubuntu, it sometimes shows MySQL using more than 200% CPU.

> top

23:19:47 up 4:42, 1 user, load average: 0.86, 0.86, 0.83
Tasks: 167 total, 1 running, 116 sleeping, 0 stopped, 0 zombie
%Cpu(s): 45.9 us, 4.8 sy, 0.0 ni, 47.5 id, 0.6 wa, 0.0 hi, 0.8 si, 0.4 st
KiB Mem : 16424600 total, 4822956 free, 4830684 used, 6770960 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 11290580 avail Mem

PID   USER   PR  NI  VIRT     RES     SHR   S  %CPU   %MEM  TIME+     COMMAND
8675  mysql  20  0   3262732  856304  16244 S  162.8  5.2   74:18.57  mysqld

MySQLTuner gives me these results:

[--] Skipped version check for MySQLTuner script
Please enter your MySQL administrative login: perimetrix
Please enter your MySQL administrative password:
[OK] Currently running supported MySQL version 5.7.27-0ubuntu0.18.04.1
[OK] Operating on 64-bit architecture

-------- Log file Recommendations ------------------------------------------------------------------
[--] Log file: /database/mysql/mypm-aws.err(0B)
[!!] Log file /database/mysql/mypm-aws.err doesn't exist
[!!] Log file /database/mysql/mypm-aws.err isn't readable.

-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in InnoDB tables: 8.0G (Tables: 1130)
[OK] Total fragmented tables: 0

-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.

-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[!!] failed to execute: SELECT CONCAT(user, '@', host) FROM mysql.user WHERE (IF(plugin='mysql_native_password', authentication_string, password) = '' OR IF(plugin='mysql_native_password', authentication_string, password) IS NULL) AND plugin NOT IN ('unix_socket', 'win_socket', 'auth_pam_compat')
[!!] FAIL Execute SQL / return code: 256
[OK] All database users have passwords assigned
[--] Bug #80860 MySQL 5.7: Avoid testing password when validate_password is activated

-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined

-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 1d 2h 21m 0s (103M q [1K qps], 10K conn, TX: 959G, RX: 285G)
[--] Reads / Writes: 99% / 1%
[--] Binary logging is disabled
[--] Physical Memory     : 15.7G
[--] Max MySQL memory    : 8.7G
[--] Other process memory: 0B
[--] Total buffers: 8.2G global + 1.1M per thread (500 max threads)
[--] P_S Max memory usage: 72B
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 8.4G (53.51% of installed RAM)
[OK] Maximum possible memory usage: 8.7G (55.48% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (0/103M)
[OK] Highest usage of available connections: 40% (202/500)
[OK] Aborted connections: 0.06%  (7/10980)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[!!] Query cache efficiency: 0.0% (0 cached / 101M selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 187K sorts)
[!!] Joins performed without indexes: 53751600
[OK] Temporary tables created on disk: 0% (108 on disk / 70K total)
[OK] Thread cache hit rate: 98% (202 created / 10K connections)
[!!] Table cache hit rate: 2% (2K open / 83K opened)
[OK] Open file limit used: 0% (0/5K)
[OK] Table locks acquired immediately: 100% (237 immediate / 237 locks)

-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 72B
[--] Sys schema is installed.

-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is disabled.

-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 18.2% (24M used / 134M cache)
[OK] Key buffer size / total MyISAM indexes: 128.0M/43.0K
[OK] Read Key buffer hit rate: 96.5% (258 cached / 9 reads)

-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[!!] InnoDB buffer pool / data size: 8.0G/8.0G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (1.171875 %): 48.0M * 2/8.0G should be equal to 25%
[OK] InnoDB buffer pool instances: 8
[--] Number of InnoDB Buffer Pool Chunk : 64 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (13946932499 hits/ 13947178305 total)
[!!] InnoDB Write Log efficiency: 30.94% (117771 hits/ 380652 total)
[OK] InnoDB log waits: 0.00% (0 waits / 262881 writes)

-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is disabled.

-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.

-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.

-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: ROW
[--] XA support enabled: ON
[--] Semi synchronous replication Master: Not Activated
[--] Semi synchronous replication Slave: Not Activated
[--] This is a standalone server

-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    Adjust your join queries to always utilize indexes
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64:
    Read this before increasing for MariaDB https://mariadb.com/kb/en/library/optimizing-table_open_cache/
    This is MyISAM only table_cache scalability problem, InnoDB not affected.
    See more details here: https://bugs.mysql.com/bug.php?id=49177
    This bug already fixed in MySQL 5.7.9 and newer MySQL versions.
    Beware that open_files_limit (5000) variable
    should be greater than table_open_cache (2000)
    Before changing innodb_log_file_size and/or innodb_log_files_in_group read this:
Variables to adjust:
    query_cache_size (=0)
    query_cache_type (=0)
    query_cache_limit (> 8M, or use smaller result sets)
    join_buffer_size (> 256.0K, or always use indexes with JOINs)
    table_open_cache (> 2000)
    innodb_buffer_pool_size (>= 8.0G) if possible.
    innodb_log_file_size should be (=1G) if possible, so InnoDB total log files size equals to 25% of buffer pool size.

I'm not able to find what is causing the problem.
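One thing I plan to look at next (my own idea, assuming performance_schema is enabled) is which statement digests run without indexes, since MySQLTuner flagged 53M unindexed joins:

-- Hypothetical check for the unindexed joins MySQLTuner reported
SELECT DIGEST_TEXT, COUNT_STAR, SUM_NO_INDEX_USED
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_NO_INDEX_USED DESC
LIMIT 10;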

Jenkins build email notification by parsing log

Posted: 05 Feb 2022 01:01 AM PST

I am using Jenkins to deploy to multiple servers. There are 4 backend servers running, and I have a script called deploy.sh on each server. At deployment time, Jenkins creates an ssh session and runs the deploy.sh script on each server, one by one. But sometimes the build console log shows the error "Errno: cannot allocate memory" for a few servers and then moves on to the next ones. I am looking for a solution that will send me an email whenever the "error" keyword appears in the Jenkins log.

I found this link: https://wiki.jenkins.io/display/JENKINS/Log+Parser+Plugin

But it doesn't provide email notification, and I am looking for a better solution.
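The crude fallback I have in mind (a sketch under my own assumptions: a mail command exists on the Jenkins host, the standard on-disk build log layout, and the log is still being written while the build runs, so this only sees output so far) is a post-build shell step like:

# Hypothetical post-build step: mail me when "error" shows up in this build's log
log="$JENKINS_HOME/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
grep -qi "error" "$log" && \
    mail -s "Errors in $JOB_NAME #$BUILD_NUMBER" me@example.com < "$log"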

VPN from AWS Workspace

Posted: 05 Feb 2022 03:04 AM PST

I recently created a Windows 10 AWS WorkSpace and I want to connect from it to my public VPN server. I am using the same settings as on my home internet, but in AWS the connection times out. Is something network-related preventing the L2TP or PPTP VPN connections? Can I configure or set up additional network settings on AWS? Thanks.

Nginx: connect() to xxx.xxx.xxx.184:3000 failed (22: Invalid argument) while connecting to upstream,

Posted: 05 Feb 2022 02:02 AM PST

This is my first time asking a question on Server Fault, so if I miss something, please let me know!

I have been trying to solve this problem for two days now and still can't find a solution. Here's my scenario:

Server A

  • Node app A on port 3000
  • Nginx on port 80
  • when I go to /hello, app B should be served from Server B

Server B

  • Node app B on port 3000

When I visit /hello, I get 502 Bad Gateway.

In my error log, I see this:

connect() to 162.243.104.184:3000 failed (22: Invalid argument) while connecting to upstream, client: 75.72.170.8, server: 107.170.64.149, request: "GET /hello/ HTTP/1.1", upstream: "http://162.243.104.184:3000/hello/", host: "107.170.64.149"  

Here's my nginx conf for Server A:

server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;

  server_name 107.170.64.149;

  location / {
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_pass http://107.170.64.149:3000;
    proxy_redirect off;
  }

  location /hello {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Host $host;
    proxy_pass http://162.243.104.184:3000;
    proxy_bind 162.243.104.184;
    proxy_redirect off;
  }
}

In my /etc/sysctl.conf, I added:

net.ipv4.ip_nonlocal_bind=1

to allow processes to bind to a non-local address, because I previously got this error:

bind(162.243.104.184) failed (99: Cannot assign requested address) while connecting to upstream, client: 75.72.170.8, server: 107.170.64.149, request: "GET /hello/ HTTP/1.1", upstream: "http://162.243.104.184:3000/hello/", host: "107.170.64.149"  

Any guidance or help would be much appreciated! Thank you!
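While writing this up, I started to suspect (my assumption, not verified) that proxy_bind sets the local source address nginx uses for the upstream connection, so binding to Server B's address from Server A cannot work; without it the location would just be:

location /hello {
    proxy_set_header Host $host;
    # no proxy_bind: the kernel picks Server A's own source address
    proxy_pass http://162.243.104.184:3000;
}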

Edit 1

My new nginx config:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        server_name 107.170.39.161;

        location / {
          proxy_http_version 1.1;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection 'upgrade';
          proxy_set_header Host $host;
          proxy_pass http://107.170.39.161:3000;
          proxy_redirect off;
        }

        location /hello {
          rewrite /hello(.*) /$1 last;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_set_header Host $host;
          proxy_pass http://162.243.33.9:3000;
          proxy_redirect off;
        }
}

Edit 2

Server A

  • http://107.170.113.66/
  • this server serves appA
  • when I go to /, I should see the SSR landing page, which is served from appA
  • when I go to /hello, I should supposedly see "hello", served from Server B. Instead I get "There is no route for the path: /hello" in the browser console, and I don't see any request reach Server B

Server B

I apologize for changing the server address so often.

Changing source MAC address of routed packets

Posted: 05 Feb 2022 12:06 AM PST

I have a Linux box with one network interface and IP forwarding enabled. Let's say my IP address is 192.168.1.1 and my MAC is 11:11:11:11:11:11. When a packet that is not targeted at my host arrives, it gets routed by the kernel, and the outgoing packet has source MAC address 11:11:11:11:11:11, i.e. the MAC address of my host. I want to change this behavior and set a predefined source MAC address for all routed packets. Is it possible to achieve this with the standard networking tools available in Linux? If not, is it possible to implement this in user space with libraries like pcap?

Configuration of spamassassin on Fedora 21 with Postfix

Posted: 05 Feb 2022 01:37 AM PST

I was unable to find any help anywhere on installing spamassassin on a modern Fedora system. All the advice out there is old and doesn't apply, so far as I found, anyway. The official documentation does NOT pertain to Fedora; it's too generic and is missing important Fedora implementation details.

And how it's done is not straightforward enough for me to know what to do! The most important missing link appears to be what to tell Postfix about how to call spamassassin. It's also completely unclear where configuration options are to be defined.

There's a lot more that follows, but MAYBE this all boils down to nobody bothering to give appropriate information on what the Postfix smtpd_milter entry should be. The entry I've tried is based on the data found in /usr/share/doc/spamass-milter-postfix; this file says the right value is unix:/run/spamass-milter/postfix/sock, BUT while the directory exists, the socket entry is not there, and the error message cited below is returned.

...OK, the longer story:

I've installed these versions:

spamass-milter-0.4.0-1.fc21.x86_64
spamass-milter-postfix-0.4.0-1.fc21.noarch
spamassassin-3.4.0-13.fc21.x86_64

And again, this is Fedora 21. The installed Postfix is:

postfix-2.11.3-1.fc21.x86_64  

When I run 'man spamassassin' it points me to /var/lib/spamassassin/3.004000, which contains updates_spamassassin_org.cf and a subdirectory of the same name (minus the .cf) which in turn contains a large collection (65) of .cf files. The man page also points me to /usr/share/spamassassin, which in turn contains a large collection (63) of similarly named .cf files that aren't quite identical.

I sure hope I don't have to learn everything about all of these to get spamassassin working! Thankfully, but insufficiently, the man page ALSO points me to /etc/mail/spamassassin which contains, among other things, a file called local.cf. In there I found a link to a page on how to install and integrate spamassassin with postfix - here (), but that advice wasn't sufficient AND appeared to not be so clueful. For example, among the first things it suggests is to create a group and user account called "spamfilter" when, of course, the stock yum install already created the account sa-milt, whose entry appears to be the right one:

sa-milt:x:982:477:SpamAssassin Milter:/var/lib/spamass-milter:/sbin/nologin   

I imagine there needs to be an smtpd_milters entry in Postfix's main.cf with whatever hook is needed to tell Postfix how to call spamassassin. PRESENTLY I have an entry for openDKIM:

# This is for openDKIM - missing are clamav and spamassassin:
smtpd_milters = inet:localhost:8891

So it seems to me to be an error to create these user accounts. Also, if these accounts ARE needed, why didn't the installation scripts already create the user and group? Additionally, the page talks about using "service" to start spamassassin, BUT, as any competent Fedora 21 system administrator knows, you don't use "service", you use systemctl!

...Ignoring that...

The service DOES start - and stay up - using:

systemctl start spamassassin  

BUT, it doesn't work.

In particular, it seems obvious to me that there needs to be a hook into Postfix, but it's also obvious that the non-Fedora-21 strategy for doing that fails. We are directed to update Postfix's main.cf smtpd_milters entry; here's what they suggest:

# First entry is for openDKIM
smtpd_milters = inet:localhost:8891, unix:/run/spamass-milter/postfix/sock

The problem, though, is that this doesn't work. We get this in the log file:

postfix/smtpd[18151]: warning: connect to Milter service unix:/run/spamass-milter/postfix/sock: No such file or directory  

Yet I can't seem to find the CORRECT entry for smtpd_milters for this version combination!

CLEARLY there should be installation directions somewhere, but NONE of what I've found pertains to these versions. Please either tell me how this should be done or point me to where I can find a competent write-up that DOES apply! IN PARTICULAR, what's the appropriate smtpd_milters entry, OR how do I create a configuration combination (smtpd_milters / spamassassin configuration) that works?
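One suspicion I still need to rule out (purely my assumption): the socket may be created by the spamass-milter service rather than the spamassassin service, so starting only the latter would explain the missing socket:

# Hypothetical check: does starting the milter service create the socket?
systemctl start spamass-milter
systemctl enable spamass-milter
ls -l /run/spamass-milter/postfix/sock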

nginx won't cache images when they are read through PHP files

Posted: 05 Feb 2022 04:00 AM PST

I have a .php file which loads images to hide their location. Every image is correctly cached through this directive:

location ^~ \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires max;
            valid_referers server_names blocked mysiteaddresishere;
            if ($invalid_referer) {
               return   403;
            }
}

By the way, the valid referrers don't work; I don't know why.

I've added ^ before ~; someone told me this makes nginx prefer the longest match. Maybe it does, but not with PHP files.

I have in my vhost something like this:

location ~ \.php$ {
    try_files /a30bc49b5ff24bc50d55ba4306d73470.htm @php;
}

location @php {
    try_files $uri =404;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9010;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_intercept_errors on;
}

I don't know if this is blocking caching in my PHP image reader; I can't figure out how to add expires headers for the images into this.

I've found this site: en.dklab.ru/lib/HTTP_ImageResizer/, so I've tried this:

location /imagehi.php {
        fastcgi_cache MYAPP;
        fastcgi_cache_valid 200 304 404 240h;
        fastcgi_cache_key "method=$request_method|ims=$http_if_modified_since|inm=$http_if_none_match|host=$host|uri=$request_uri";
        fastcgi_hide_header "Set-Cookie";
        fastcgi_ignore_headers "Cache-Control" "Expires";
      # or use proxy_* commands if you use Apache, not FastCGI PHP
}

But it still doesn't work. Any ideas what I am missing?
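While re-reading the nginx docs I noticed something I should double-check (my own reading of the docs): ^~ is a prefix-match modifier, not a regex one, so my extension block may never match any URI at all; the case-insensitive regex form would be:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires max;
}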

Nginx conf

fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen *:80;

    server_name mysite.com www.mysite.com;

    root   /var/www/mysite.com/web;

    index index.html index.htm index.php index.cgi index.pl index.xhtml;

    error_page 400 /error/400.html;
    error_page 401 /error/401.html;
    error_page 403 /error/403.html;
    error_page 404 /error/404.html;
    error_page 405 /error/405.html;
    error_page 500 /error/500.html;
    error_page 502 /error/502.html;
    error_page 503 /error/503.html;
    recursive_error_pages on;
    location = /error/400.html {
        internal;
    }
    location = /error/401.html {
        internal;
    }
    location = /error/403.html {
        internal;
    }
    location = /error/404.html {
        internal;
    }
    location = /error/405.html {
        internal;
    }
    location = /error/500.html {
        internal;
    }
    location = /error/502.html {
        internal;
    }
    location = /error/503.html {
        internal;
    }

    error_log /var/log/ispconfig/httpd/mysite.com/error.log;
    access_log /var/log/ispconfig/httpd/mysite.com/access.log combined;

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /stats {
        index index.html index.php;
        auth_basic "Members Only";
        auth_basic_user_file /var/www/clients/client0/web1/web/stats/.htpasswd_stats;
    }

    location ^~ /awstats-icon {
        alias /usr/share/awstats/icon;
    }

    location ~ \.php$ {
        try_files /a30bc49b5ff24bc50d55ba4306d73470.htm @php;
    }

    location @php {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9010;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
    }

    location /imagehi\.php\?=.+\.(jpg|jpeg|png|gif) {
        fastcgi_cache MYAPP;
        fastcgi_cache_valid 200 60m;
        # or use proxy_* commands if you use Apache, not FastCGI PHP
    }

    location /imagehi.php\?=([A-Z]|[0-9]|[a-z]|&)+\.(jpg|jpeg|png|gif)$ {
        expires max;
    }
    location ~ (imagehi\.php\?=.+\.(jpg|jpeg|png|gif))$ {
        expires max;
    }
    location ^~ \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires max;
    }

Nginx gives 404 error for rails app except the root

Posted: 05 Feb 2022 03:04 AM PST

I have an Ubuntu 12.04 LTS VPS that serves a static website with Nginx. I would like to set up a Rails application that is accessible from the subfolder 'foo'. I use Passenger to serve the Rails app.

This is how I configured Nginx:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    passenger_root /home/akarki/.rvm/gems/ruby-1.9.3-p429/gems/passenger-4.0.5;
    passenger_ruby /home/akarki/.rvm/wrappers/ruby-1.9.3-p429/ruby;

    server_names_hash_bucket_size 64;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;
    gzip  on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
    charset UTF-8;
    error_log /opt/nginx/logs/file.log warn;

    server {
        listen 80;
        server_name www.domain.com;
        return 301 $scheme://domain.com$request_uri;
    }

    server {
        listen       80;
        server_name  domain.com;
        index  index.html index.htm;
        root /srv/www/domain.com;
        passenger_enabled on;
        passenger_base_uri /foo;
        try_files $uri.htm $uri.html $uri/ =404;

        location = / {
            rewrite ^ /de permanent;
        }

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

The static website works as expected, but the only URL of the Rails app that is accessible is the root, under http://domain.com/foo.

Every other URL gives a 404 error.

Do you have any suggestions on how to fix this?
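One hunch of my own (unverified): the server-level try_files ... =404 may answer before Passenger ever sees /foo/*, so this is the first change I want to try:

# Hypothetical change: drop the server-level try_files so it no longer
# returns 404 for /foo/* before Passenger can handle it
# try_files $uri.htm $uri.html $uri/ =404;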

Asterisk / Elastix Address Book and Call Recording

Posted: 05 Feb 2022 01:01 AM PST

I am trying to build a small Asterisk-based PBX using Elastix. It will have 4 FXO ports (2 normal analog POTS lines, no ISDN, and 2 GSM connections using a GSM terminal) and 4 FXS extensions (2 IP phones and 2 Android SIP clients).

I am confused about the following two issues and need your help:

  1. I need to record all incoming/outgoing calls along with their caller IDs. Is any special hardware needed?

  2. I have about 5000-6000 contacts which I want to show up in my IP phones' menu, so that users can dial by selecting/searching the name/company. How can this be implemented, and which is the most cost-effective IP phone to purchase for a contact list of this size?

Thanks a lot for your time

What is the best log rotator for Python wsgi applications?

Posted: 05 Feb 2022 12:06 AM PST

I am running a wsgi-based application that has concurrent users accessing it. For my logging needs I tried logrotate, but found that logrotate is not too friendly to Python applications, so I tried using RotatingFileHandler and, even worse, found my logs all chopped up with parts missing!

I am considering ConcurrentRotatingFileHandler. My question is: has anyone out there experienced the same thing, and better yet, do you have a battle-tested solution for Python wsgi, concurrently accessed applications?
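For context, this is the shape of what I am considering (a sketch assuming the ConcurrentLogHandler package, which provides the cloghandler module):

# Hypothetical usage; pip install ConcurrentLogHandler
import logging
from cloghandler import ConcurrentRotatingFileHandler

logger = logging.getLogger("myapp")
# rotate at ~10 MB, keep 5 old files; the handler locks the file so several
# wsgi processes can share it without interleaving or losing records
handler = ConcurrentRotatingFileHandler("/var/log/myapp/app.log", "a",
                                        maxBytes=10 * 1024 * 1024, backupCount=5)
logger.addHandler(handler)
logger.setLevel(logging.INFO)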

Could not continue scan with NOLOCK due to data movement during installation

Posted: 05 Feb 2022 02:02 AM PST

I am running Windows Server 2008 R2 Standard Edition x64, and I installed SQL Server 2008 Developer Edition. All of the preliminary checks ran fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue; I can open those ports).

Halfway through the actual installation, I get a popup with this error:

Could not continue scan with NOLOCK due to data movement.

The installation still runs to completion when I press OK.

However, at the end, it states that the following services "failed":

  • Database Engine Services
  • SQL Server Replication
  • Full-Text Search
  • Reporting Services

How do I know whether this actually means that anything is missing from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.)?

I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix it. But I am not touching any queries.

Also, now that I have installed the app, when I log in, I keep getting this message:

TITLE: Connect to Server
------------------------------

Cannot connect to MSSQLSERVER.

------------------------------
ADDITIONAL INFORMATION:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

I went into the Configuration Manager, enabled named pipes, and restarted the service (this is something I have done before, as this message is common and not serious). I have temporarily disabled Windows Firewall and checked the instance name against the error logs.

Please advise on both of these errors; I think they are related.

Thanks
