Tuesday, January 4, 2022

Recent Questions - Server Fault



php-fpm & Apache 2 - analysing PHP Message: logs

Posted: 04 Jan 2022 07:51 AM PST

One of my servers has recently been switched to using php-fpm.

The error logs now record 404s in a new format:

[Sun Dec 26 00:11:37.827426 2021] [proxy_fcgi:error] [pid 25239:tid 140600822003456] [client 66.249.66.136:37676] AH01071: Got error 'PHP message: File does not exist: /ads.txt'
[Sun Dec 26 00:14:53.732771 2021] [proxy_fcgi:error] [pid 24741:tid 140601015035648] [client 207.46.13.93:9600] AH01071: Got error 'PHP message: File does not exist: /events/view/id/633/supercharge'

I previously used a command-line script (written in awk by one of my colleagues many years ago) to parse the logs and extract the URLs that were returning 404s, and then did some manual Excel work to tally any addresses that were erroring but still receiving a reasonable number of requests. I'm reasonably comfortable (with the awk manual at hand) updating this script...

But before I jump in and start editing this script, I suspect there must be a better way to parse these large log files. Any suggestions for a better approach?
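For what it's worth, the awk-plus-Excel workflow described above can be sketched in a few lines of Python (a sketch only, assuming the AH01071 format shown above; the sample lines are abbreviated):

```python
import re
from collections import Counter

# Matches the proxy_fcgi 404 entries shown above and captures the missing path.
# \s+ tolerates the doubled spaces that appear in the wrapped log excerpt.
PATTERN = re.compile(r"PHP message: File\s+does not exist: (\S+?)'")

def tally_404s(lines):
    """Return a Counter mapping missing path -> number of 404 log entries."""
    counts = Counter()
    for line in lines:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "[...] AH01071: Got error 'PHP message: File does not exist: /ads.txt'",
    "[...] AH01071: Got error 'PHP message: File does not exist: /ads.txt'",
    "[...] AH01071: Got error 'PHP message: File does not exist: /events/view/id/633/supercharge'",
]
for path, hits in tally_404s(sample).most_common():
    print(path, hits)
```

The Excel-style tally (paths sorted by hit count) then falls out of `most_common()` directly.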

Ubuntu dns cache issues - can't connect to several hosts

Posted: 04 Jan 2022 07:41 AM PST

I can't connect to a couple of hostnames, and the number of addresses I cannot connect to is growing. I tried restarting the DNS service and flushing the cache; it didn't work. How do I restart the DNS cache in Ubuntu 20.04 LTS?

Name resolution between On-premises and GCP environment

Posted: 04 Jan 2022 07:40 AM PST

I have an environment similar to the diagram below:

https://cloud.google.com/dns/images/hybrid_arch_using_a_single_shared_vpc_network.svg

I followed these instructions:

https://cloud.google.com/dns/docs/best-practices#hybrid-architecture-using-single-shared-vpc-network

Steps:

  1. Set up your on-premises DNS servers as authoritative for corp.example.com

I have a DNS server configured: test.local

  2. Configure an authoritative private zone (for example, gcp.example.com) on Cloud DNS in the host project of the Shared VPC network, and set up all records for resources in that zone.

I created the test.gcp zone using this command:

gcloud dns managed-zones create private-zone \
  --description=private-zone-dns \
  --dns-name=test.gcp \
  --networks=vpc-network \
  --visibility=private

Type A records were created pointing to servers.

  3. Set a DNS server policy on the host project for the Shared VPC network to allow inbound DNS forwarding.

    gcloud dns policies create DNS \
      --description=dnsservers \
      --networks=vpc-network \
      --enable-inbound-forwarding

  4. Set a DNS forwarding zone that forwards corp.example.com to the on-premises DNS servers. The Shared VPC network needs to be authorized to query the forwarding zone.

    gcloud dns managed-zones create zone-local \
      --description=servers-local \
      --dns-name=test.local \
      --forwarding-targets=X.X.X.X,Y.Y.Y.Y \
      --visibility=private \
      --networks=vpc-network

  5. Set up forwarding to gcp.example.com on your on-premises DNS servers, pointing at an inbound forwarder IP address in the Shared VPC network.

    gcloud compute addresses list \
      --filter='purpose = "DNS_RESOLVER"' \
      --format='csv(address, region, subnetwork)'

I got the forwarding IP that was on the same network as the instances.

I added this IP in Windows Server Local, in the DNS forwarder.

  6. Make sure that DNS traffic is allowed on your on-premises firewall.

Local and GCP are allowed.

  7. In Cloud Router instances, add a custom route advertisement for the range 35.199.192.0/19 to the on-premises environment.

Added custom routing to 35.199.192.0/19 and is in the BGP table in both environments.

Results: Name resolution works only in the GCP environment, locally not responding.

Test on a GCP server: on-premises name

nslookup server.test.local
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   server.test.local
Address: X.X.X.X

GCP

nslookup server1.test.gcp
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   server1.test.gcp
Address: Y.Y.Y.Y

However, when I try to do name resolution from the local environment toward GCP, the name is not resolved. My question is whether the way it's being done is wrong, or whether there's some procedure not listed in the documentation.

I've also tried using VM instances IPs to do the forwarding and it still didn't work.

Best way to create yaml files that have a variable number sections based on host

Posted: 04 Jan 2022 07:53 AM PST

I am trying to use Ansible to deploy configuration files to hundreds of machines, where different machines will have multiple iterations of specific configuration snippets. Specifically, I am using the promtail log parser, and different machines will have different log file locations to parse, with different labels. Ideally I want to keep the Ansible configuration pretty simple so I can just use pull requests to make changes to the various sections.

Initially I was going to use group_vars, with each log file location defined there. That works fine as long as I am only building a single log location. Once I need multiple log locations it breaks, as I will only get one value back from group_vars.

To illustrate.

hosts:
  LOGFILE1:
    hosts:
      app[15:16].qa2.example.com
  LOGFILE2:
    hosts:
      app[16:17].qa2.example.com

GROUP_VARS/LOGFILE1
GROUP_VARS/LOGFILE2

I could possibly iterate through each group and then append the output to the config file, but I don't see a way to do that with the template function. Ideally I could just iterate through all of the log file locations, but I'm not sure how to do that.

Or maybe I could use an external variable file and then use a conditional of some sort to determine which hosts get which configuration?

Same data in the group_vars...

file: /opt/tomcat/fxcts/logs/gxxss.log
comp: TX_Tomcat
app: TX
module: GXX
pipeline_regex: None
pipeline_vars:
  - None
drop_expression: None
Multiline: None

Here is the jinja template

scrape_configs:
- job_name: {{ module }}
  pipeline_stages:
      - regex:
          expression: {{ pipeline_regex }}
      - labels:
          {% for labels in pipeline_vars -%}
          {{ labels }}:
          {% endfor %}
  {#  This is a test #}
      - timestamp:
          source: date
          format: 2006-01-01 15:00:00.000000
      - drop:
          expression: {{ drop_expression }}
      - multiline:
          firstline: ""
          max_wait_time: 3s
  static_configs:
  - targets:
      - localhost
    labels:
      app: {{ app }}
      host: {{ ansible_hostname }}
      component: {{ comp }}
      __path__: {{ file }}
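One common way around the one-value-per-group limitation (a sketch only; the `promtail_logs` list name is my invention, and the field names are taken from the group_vars above) is to hold all log definitions for a group in a single list variable:

```yaml
# group_vars/LOGFILE1 -- hypothetical restructuring: one list, many log definitions
promtail_logs:
  - file: /opt/tomcat/fxcts/logs/gxxss.log
    comp: TX_Tomcat
    app: TX
    module: GXX
  - file: /opt/tomcat/fxcts/logs/other.log
    comp: TX_Tomcat
    app: TX
    module: OTHER
```

The template can then emit one job block per list entry, rather than relying on a single flat value:

```
scrape_configs:
{% for log in promtail_logs %}
- job_name: {{ log.module }}
  static_configs:
  - targets:
      - localhost
    labels:
      app: {{ log.app }}
      host: {{ ansible_hostname }}
      component: {{ log.comp }}
      __path__: {{ log.file }}
{% endfor %}
```

Hosts belonging to several groups could additionally merge such lists with a `vars` plugin or a `union` filter, but that part is untested speculation.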

Postfix emails error, loops back to myself

Posted: 04 Jan 2022 07:44 AM PST

I'm struggling with Postfix to send out emails from a form on my website:

Google Domain, hosting provided by DigitalOcean with a LAMP droplet; this is my DNS config:

[screenshot: DNS configuration]

The mail function in my .php file wants to send an email from info@mydomain.io to myname@mydomain.io.

Everything appears to go through successfully, except the emails are never actually sent.

/var/log/mail.log mentions that status=bounced (mail for mydomain.io loops back to myself)

Here's my main.cf config:

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = packer-611a9b0e-18c5-2e19-5583-bed9efc126b7
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
myorigin = /etc/mailname

I've looked through every guide available for tweaking /etc/postfix/main.cf, but nothing works. Thanks in advance to anybody who's willing to help.

How to identify cloud server to remove online document? [closed]

Posted: 04 Jan 2022 07:15 AM PST

I'm currently dealing with a copyright issue. Someone is hosting an article that I created at this website: 'https://content.app-sources.com/s/43300185708660311/uploads/Reports_/Short_Communication_-_Poaching_in_the_COVID-19_world-6937706.pdf' .

We are trying to identify the source, and have only managed to figure out that it is some kind of cloud hosting service. However, we are unable to identify which cloud service it is. Ideally we would also like to find the associated account so that we can contact that person.

I hope this is the right forum for this kind of question. If not, I would appreciate it if you could point me towards the right Stack Exchange site.

Cannot clone a bitbucket.org project to Linux Ubuntu via WSL in a Windows10

Posted: 04 Jan 2022 07:08 AM PST

So I'm trying to clone a project via Visual Studio Code using a Linux machine connected with WSL on my Windows system. I get the error: "fatal: unable to access '.....git/': Failed to connect to bitbucket.org port 443: Connection timed out".

I can do a ping to bitbucket.org.

If I open Visual Studio code using only windows, I can clone the project.

What could it be?

Azure Flow Logs not logging all traffic

Posted: 04 Jan 2022 07:11 AM PST

I managed to setup NSG Flow Logs in Azure for one of my NSG's using the MS documentation: https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview

I can download the JSON files from the storage account and inspect them. I also can use the PowerBI dashboard and view the information generated from the flow logs. I used the modified PowerBI dashboard from Sameeraman:

https://sameeraman.wordpress.com/2018/11/15/azure-network-troubleshooting-using-nsg-flow-logs-and-powerbi-part-2/

Now I want to modify the PowerBI dashboard in such a way that I can view the amount of bytes sent and received from or to hosts. I am able to do this, however, when setting it up, I noticed that not all traffic seems to be logged.

I used 2 methods to generate traffic from my local machine to the azure VM through the internet as test. I copied several large files via RDP and have created a linked server on my local SQL Server instance to the SQL Server instance on the Azure VM and inserted a bunch of data into tables on a test database there. Now, I don't see the traffic in the NSG logs as I would expect. I would expect that there were a lot of entries or at least 1 entry that states a lot of bytes have been transferred, but none of that. I only see a single entry in the NSG log, but without any bytes sent.

As an example: "1641291993,x.x.x.x,10.0.2.4,54955,1433,T,I,A,B,,,,"

The above is flow state 'Begin', and there is no 'C' for 'Continue' or 'E' for 'End' to be found in any subsequent log. So I thought the session might still be open, and that one more entry with the 'End' flow state would eventually be logged, mentioning the total bytes sent for that session (since the bytes sent are cumulative, refer to the docs), once I closed my SQL Server Management Studio, for example. This did not happen. There were no subsequent log entries from that particular source IP. Nothing at all.

So to summarize: I created an NSG flow log for a particular NSG that is applied to the subnet of a specific VM. I then generated network traffic from my local workstation by copying large files to the VM and inserting data via SQL Server into a table in a database on that VM. Then I looked at the NSG flow log entries, but found only one entry for each action (e.g. the SQL inserts), even after I closed my session (e.g. SSMS) to the VM.

To be sure, I also created a separate rule in the NSG for in and outbound to allow traffic to and from this Azure VM on that particular port. This way the packets I send from my local machine should be matched to and logged under that rule.

So I'm wondering: am I doing something wrong here, or does the logging just work differently from my expectations?

mdadm mdadm: cannot open Device or resource busy

Posted: 04 Jan 2022 06:42 AM PST

I have two drives in my Ubuntu server:

/dev/nvme0n1

/dev/nvme1n1

and I want to create a RAID 0 array, but when I run this: sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/xda /dev/xdb

I get this output: mdadm: super1.x cannot open /dev/nvme0n1: Device or resource busy

mdadm: ddf: Cannot use /dev/nvme0n1: Device or resource busy

mdadm: Cannot use /dev/nvme0n1: It is busy

mdadm: cannot open /dev/nvme0n1: Device or resource busy

I think nvme0n1 is the same drive Ubuntu is installed on, and that's why it doesn't work, but I don't have much experience with Linux.

How to reserve a processor in Windows Server for Remote Desktop

Posted: 04 Jan 2022 06:29 AM PST

Windows Server 2012 R2, every once in a while a runaway process will take up all the CPU and my RDP session will either take forever to start or disconnect after a long wait. Is there a way to reserve a CPU so that I never have an issue Remoting into the server?

How to issue SSL certificate with Nginx docker container using FreeIPA?

Posted: 04 Jan 2022 06:10 AM PST

Instead of using a self-signed (untrusted) SSL certificate, I want to issue a certificate from a trusted source, in this case from a FreeIPA instance (I'm new to FreeIPA).

How can this be done?

Is Kerberos required for this?

No User exists for 'query user'

Posted: 04 Jan 2022 06:03 AM PST

I have one weird issue on a virtual machine. When I run 'query user user1', it does not show the user's active session; it says 'No User exists for user1', even though there is an active connection. It works fine for other users. Why is this happening? Please help. Thanks.

C:\Users\user2>query user
 USERNAME         SESSIONNAME        ID  STATE   IDLE TIME  LOGON TIME
 user2            rdp-tcp#2           2  Active          .  4-1-2022 12:12
>user1            rdp-tcp#57          3  Active          .  4-1-2022 13:35

C:\Users\user2>query user user1
No User exists for user1

C:\Users\user2>query user user2
 USERNAME              SESSIONNAME        ID  STATE   IDLE TIME  LOGON TIME
 user2         rdp-tcp#57          3  Active          .  4-1-2022 13:35

send email from another server than FROM domain without being marked as spam

Posted: 04 Jan 2022 07:26 AM PST

I want to send a mail from a website. The mail server from this domain is not publicly reachable, so I can't use that to send the mail.

The webserver that hosts the website has another email server that I can/have to use. But I want the FROM to be the website domain.

How can I set this up without my mails being marked as spam for claiming to be from the website domain when the mail server that sent them is not?

Not sure if I am clear. Maybe an example helps:

domain: a.com
domain mail server: a.com (e.g. mail@a.com - can't use that mail server for sending)
webserver mail server: mail.customer123.somehoster.com (can only use that server)

So I want to send a mail via the mail.customer123.somehoster.com mail server but the sender should appear to be mail@a.com. I understand that this looks like spam mail to most servers. What's the correct way to set this up?

So far I've read that an SPF record in the DNS is all I need. Is that correct? Is that the best practice? Can someone help me with the correct entry?
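For illustration only (hostnames copied from the example above; whether to use an `a:`, `mx`, or a hoster-provided `include:` mechanism depends on somehoster's documentation), an SPF TXT record authorizing that server could look like:

```
; zone file entry for a.com -- hypothetical values
a.com.  IN  TXT  "v=spf1 mx a:mail.customer123.somehoster.com ~all"
```

Note that receivers increasingly also check DKIM and DMARC alignment, so an SPF record alone may not be enough to stay out of spam folders.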

check what processes connecting to external port

Posted: 04 Jan 2022 05:57 AM PST

I am running an email server for a school association, and we offer an email forwarding service for graduated students: they get an email alias in our domain name, like johndoe@someschoolgrad.com, and we forward the email to the personal address they registered with us.

We recently upgraded from a very old email server that no longer supports newer TLS versions, and moved to an Ubuntu 20.04 postfix + spamassassin + perl SPF check setup. After setup we found that the IP has a bad reputation for sending spam. I checked the postfix main.cf again, and postfix should not be acting as an open relay.

smtpd_sender_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain
smtpd_relay_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    defer_unauth_destination

The email volume lookup was a bit worrying, as some websites seem to record my IP sending 1 out of every 30 million emails in the world on some days:

https://talosintelligence.com/reputation_center/lookup

[screenshots: email volume history and email reputation]

Of course I don't think they have bugged my server to check on me, so I don't know where their data comes from.

I am thinking of checking whether any other program on my server may be sending email.

I have set up ufw to allow outbound traffic to destination port 25, with logging:

#sudo ufw status
To  Action      From
--  ------      ----
25  ALLOW OUT   Anywhere  (log)

Grepping ufw.log for "DPT=25 ", I see around 6000 outbound entries over the past 60 hours, which looks reasonable to me given we have on the order of 1000 members.

I also checked mail.log; the count of delivery lines (250 ok, 550*, 454*) adds up to roughly 3000.
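The mail.log line-counting described above can be made a bit more precise with a short script (a sketch; the sample lines are abbreviated from the bounce messages later in this question, and the real log would be read from its file):

```python
import re
from collections import Counter

# Pull the delivery status out of each postfix mail.log line.
STATUS = re.compile(r"status=(sent|bounced|deferred)")

def tally_statuses(lines):
    """Count sent/bounced/deferred entries, mirroring the manual grep tally."""
    counts = Counter()
    for line in lines:
        match = STATUS.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "... status=bounced (host gmail-smtp-in.l.google.com[74.125.130.26] said: 550-5.7.26 ...)",
    "... status=deferred (connect to mail.feed-silver.cam[89.144.62.60]:25: Connection refused)",
    "... status=sent (250 2.0.0 OK)",
]
print(tally_statuses(sample))
```

Comparing these per-status counts against the ufw DPT=25 count over the same window would show whether something outside postfix is opening outbound SMTP connections.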

I have also seen postfix try many times to deliver non-delivery notices where the connection either times out or is rejected. I have since increased the minimal and maximal backoff times, and decreased the queue lifetime, to try to reduce the retry volume for the spam we receive at the aliases.

I also receive bounces from, for example, Gmail and some other SMTP servers:

status=bounced (host gmail-smtp-in.l.google.com[74.125.130.26] said: 550-5.7.26 This message does not have authentication information or fails to 550-5.7.26 pass authentication checks. To best protect our users, the message has been blocked
status=bounced (host gmail-smtp-in.l.google.com[74.125.130.26] said: 550-5.7.1 [MY IP] Our system has detected that this message is 550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail, this message has been blocked
status=bounced (host gmail-smtp-in.l.google.com[74.125.130.26] said: 550-5.7.1 [MY IP] Our system has detected that this message is 550-5.7.1 likely suspicious due to the very low reputation of the sending IP 550-5.7.1 address. To best protect our users from spam, the message has been 550-5.7.1 blocked.
status=deferred (host imsmx1.netvigator.com[219.76.94.45] refused to talk to me: 554-wironin01.netvigator.com 554 Rejected: Spam email from server IP <MY IP> is blocked by Talos Please go to "https://www.talosintelligence.com/reputation_center/lookup?search=MY IP"
status=deferred (connect to mail.feed-silver.cam[89.144.62.60]:25: Connection refused)
(sender non-delivery notification) status=bounced (host aspmx.l.google.com[142.251.12.26] said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces.
  1. Should I be worried that other processes are sending email from my server, trashing my email reputation? That's why I wish I could tell from the ufw log which processes tried to connect to external port 25.
  2. Is the data from email reputation sites reliable? I am not sure whether an email volume of 2+ is anything to worry about, but netvigator (an ISP) checking it gives it a reasonable level of credibility.
  3. For our association's email forwarding service: should we outright drop emails with a high spam score, or simply follow spamassassin's default practice of adding [SPAM] to the subject and letting the final receiver decide the handling? Reference: https://support.google.com/a/answer/175365?hl=en
  4. Does our forwarding of spam email trash the reputation of our sending IP?
  5. Should we relay sender non-delivery notifications back to the sender? Sometimes the mail log shows them failing immediately; I suspect those are emails with forged headers.
  6. Is there an IP equivalent of SPF for domain names, or is that entirely impossible due to email relaying?
  7. Does setting up DKIM help the reputation of my IP? We do have a small volume of email sent out via our own domain.

Change default docker registry in Openshift 4.7

Posted: 04 Jan 2022 05:25 AM PST

How can I change the default docker image registry in OpenShift? I already modified /etc/containers/registries.conf on the worker and master nodes and put in something like this, but it didn't work:

[[registry]]
prefix = "my_private_registry.com"
location = "my_private_registry.com"
insecure = false

How can I change the default repo? Thank you

Set up CentOS 8 with 2 IP addresses on separate subnets

Posted: 04 Jan 2022 05:02 AM PST

I'm trying to set up a virtual machine with two different IPs, for example:

IP1: 10.17.252.0
IP2: 10.16.51.0

Gateways = .254 for both subnets

The VM needs to be able to communicate with both subnets through the configured gateways.
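For illustration, a minimal sketch of the two interface files (device names, host addresses, and the /24 prefixes are my assumptions; CentOS 8 applies these via NetworkManager). Only one interface gets a GATEWAY line, since two default routes conflict; answering on both subnets through their respective gateways generally also needs policy-based routing (rule-/route- files or `nmcli` equivalents) on top of this:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (hypothetical values)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.17.252.10
PREFIX=24
GATEWAY=10.17.252.254

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.16.51.10
PREFIX=24
```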

PHP unable to identify sqlsrv_connect() function while trying to connect SQL Server

Posted: 04 Jan 2022 06:43 AM PST

I'm trying to connect to my local MSSQL Server from a simple PHP file using the sqlsrv_connect() function, but every time I call the file in the browser through localhost, it throws a 500 (Internal Server Error) saying: "PHP Fatal error: Uncaught Error: Call to undefined function sqlsrv_connect() in C:\inetpub\wwwroot\AJAX_Tutorial\get_db_data.php:4". get_db_data.php is the file from which I'm trying to connect to the server. It seems PHP (or localhost) can't find the sqlsrv_connect() function, but as far as I can tell, I did everything needed to make sure PHP can connect to SQL Server.

My environment: Windows 10 Pro, Version 21H2, 64-bit.

What I have done:

  1. Enabled IIS ensuring CGI/Fast CGI is working.
  2. Installed PHP 8.1.1 non-thread safe x64 version at C:\Program Files\PHP-8.1.1
  3. In C:\Program Files\PHP-8.1.1, renamed php.ini-development file to php.ini
  4. In the php.ini file, uncommented the extension_dir = "ext" directive.
  5. Downloaded php_wincache.dll and added it to the default ext directory of PHP.
  6. Added the line extension=php_wincache.dll at the Dynamic Extensions section of the php.ini file.
  7. Installed PHPManagerForIIS_V1.5.0 and configured IIS accordingly so that PHP can be hosted through IIS. Also enabled the php_wincache.dll extension here.
  8. Installed MSSQL Server 2019 Developer Edition along with the respective Management Studio.
  9. Created the respective database and tables in SQL Server that I want to connect to from PHP.
  10. Ensured Microsoft ODBC Driver 17 for SQL Server is installed in my PC, that is required by PHP.
  11. Ensured Microsoft SQL Server 2012 Native Client is installed in my PC, that is required by PHP.
  12. Downloaded Microsoft Drivers for PHP for SQL Server 5.9 and extracted its contents. Copied the file named php_sqlsrv_80_nts_x64.dll in the package and pasted it in the default ext directory of PHP.
  13. Added the line extension=php_sqlsrv_80_nts_x64.dll at the Dynamic Extensions section of the php.ini file.
  14. In IIS Manager, through PHP manager, enabled the php_sqlsrv_80_nts_x64.dll extension.
  15. Created a phpinfo.php file in the root of the IIS site, which ran successfully but showed no mention of wincache or sqlsrv in it.

After the steps above, I ran the actual PHP file trying to connect to the SQL Server, but it throws an error saying it can't find the sqlsrv_connect() function. Suspecting that php_sqlsrv_80_nts_x64.dll was not being loaded at PHP startup, I ran php --ini in the command prompt. That's when the following messages were thrown:

PHP Warning: PHP Startup: Unable to load dynamic library 'php_wincache.dll' (tried: ext\php_wincache.dll (The specified module could not be found), ext\php_php_wincache.dll.dll (The specified module could not be found)) in Unknown on line 0

Warning: PHP Startup: sqlsrv: Unable to initialize module
Module compiled with module API=20200930
PHP compiled with module API=20210902
These options need to match in Unknown on line 0

Configuration File (php.ini) Path:
Loaded Configuration File:         C:\Program Files\PHP-8.1.1\php.ini
Scan for additional .ini files in: (none)
Additional .ini files parsed:      (none)

However, PHP itself seems to be running fine, because when I used the jQuery AJAX get() and post() methods from an HTML file to fetch data from another PHP file, I was successful in doing so. No exception was thrown then.

So what am I missing, given that neither php_wincache.dll nor sqlsrv loads during PHP startup, and I can't connect to the SQL Server from the PHP file? As I'm new to jQuery, AJAX, and PHP, I'm not much aware of their intricacies, and I've been stuck with this issue for the past four days. I've used every resource at hand, but nothing is working. Please help; I can't get ahead with my tasks because of this.

Thanks and Regards!

get_db_data.php code:

<?php
    $serverName = "(local)";    // Optionally use port number (1433 by default).
    $connectionString = array("Database"=>"TestDB");    // Connection string.
    $conn = sqlsrv_connect($serverName, $connectionString); // Connect using Windows Authentication.

    if ($conn === false) {
        echo "Connection could not be established.<br/>";
        die(print_r(sqlsrv_errors(), true));
    } else {
        echo "Connection established successfully.<br/>";
    }

    sqlsrv_close($conn);    // Close connection resources.
?>

Using CodeGear/Embarcardero scktsrvr.exe on Linux

Posted: 04 Jan 2022 05:48 AM PST

Happy New Year to y'all!

We're trying to adapt (not port) an application that was developed using WinDev so it can run on Linux, for various reasons.

Thanks to Wine, the application installs alright, but we're at a standstill because CodeGear/Embarcadero's scktsrvr.exe won't start.

Is there a way to make it work in Linux (Ubuntu or a derivative)? It installs as part of the software package we're hoping to make work on Linux and is used to connect the application to an Advantage Database Server, which installs just fine too.

Without this socket or a substitute, we're stuck. Any help would be greatly appreciated.

Installing Oracle Java in Red Hat Enterprise Linux

Posted: 04 Jan 2022 07:22 AM PST

I am seeking advice on a solution for updating Oracle Java on a large number of hosts.

We have a large number of RHEL hosts, and we would like to use yum update rather than rpm to upgrade Oracle Java: yum update would install the latest version of Java on those hosts, and we can easily automate the version upgrade with yum update playbooks.

However, my understanding is that yum is a non-standard means of installing Oracle's Java runtimes. Also, in order to use yum install we must have a repository, but Oracle's repository is only available for Oracle Linux, not RHEL.

Is there any solution where we can use yum update in RHEL with Redhat repository which contains the Oracle Java packages? As I am new to this, any advice would be greatly appreciated.

Website responds very slow using remote database

Posted: 04 Jan 2022 07:52 AM PST

I want two identical websites to share one database. One server is in Asia, hosting a website and the database. Another server is in the US, hosting the same website via the remote database. However, the website in the US responds very slowly; when I move the database to the local (US) server, it responds fast. How can I speed up the connection between the server in the US and the database in Asia?

I am using Centos7+Nginx+MySQL.

Directory traversal fix for nginx config

Posted: 04 Jan 2022 04:44 AM PST

I discovered that my website has this issue and I haven't been able to fix it. I tried several things, like checking whether parent prefixed locations for Nginx alias directives end with a directory separator, but no luck so far. merge_slashes on is the default setting. I've read about AppArmor and SELinux; is that the way to go? I'm on Ubuntu 18. In other words, I'm able to download the file http://example.com///etc/passwd and I want to prevent this. Any help is appreciated. Here is my config:

server {
  listen 80;
  server_name      .example.com;
  return 301 https://example.com$request_uri;
}

server {
  server_name    www.example.com;
    listen 443 ssl http2;
    ssl_prefer_server_ciphers On;
    ssl_session_cache shared:SSL:10m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers '......
    ssl_certificate          /...crt;
    ssl_certificate_key      /..key;

    return 301 https://example.com$request_uri;
}

server {
  server_name    example.com;
    listen 443 ssl http2;
    ssl_prefer_server_ciphers On;
    ssl_session_cache shared:SSL:10m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers '...
    ssl_certificate          /...crt;
    ssl_certificate_key      /.....key;

    add_header x-frame-options "SAMEORIGIN" always;
    add_header x-xss-protection "1; mode=block" always;
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; $

    root /var/www/www.example.com;
    index index.php;
    client_max_body_size 10M;

  access_log /var/log/nginx/example.com.log;
  error_log /var/log/nginx/example.com.error.log error;

  location / {
    try_files $uri $uri/ /index.php;
  }

  location /shopping/ {
        index index.php index.html index.htm;
        rewrite ^/shop/wp-json/(.*?)$ /shopping/index.php?rest_route=/$1 last;
        try_files $uri $uri/ /shop/index.php?q=$uri&$args;
  }

  location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 24h;
        log_not_found off;
  }

  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }

  location ~\.(log|save|htaccess|json|csv|txt|xls)$ {
     deny all;
     error_page 403 =404 / ;
  }

  location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
  }
}

port mapping didn't happen for a container deployed on AWS ECS (using EC2)

Posted: 04 Jan 2022 04:56 AM PST

Context:

I am using Circle CI's aws-ecs/deploy-service-update orb to deploy my docker container, by pulling the latest image from AWS ECR and deploying it to AWS ECS backed by an EC2 instance. The container is a machine learning model that accepts API requests on TCP port 3000 (I am using FastAPI for this) and returns predictions. After deploying it, I couldn't send requests on port 3000 to the public IP of the container instance of the task that runs the container. (This IP is not my EC2 instance's public IP; the instance only has a private IP, and its public IP is disabled.)

Debugging

  1. I checked my security group and made sure that the port 3000 is open to receive requests from all IPs(0.0.0.0), as part of the inbound rule.
  2. I stopped the task(which automatically will stop the container running in the EC2 instance) with the thought that something may have gone wrong from Circle CI. Then, according to the service configuration(1 desired task) and task definition of AWS ECS, a new task has started(hence the container) automatically. But, I couldn't send requests to this either.
  3. I SSHed into my EC2 instance to check whether port 3000 was open. This is when I learned that ports weren't mapped at all [screenshot]: as you can see, the PORTS column is empty for the container, even though the container's command accepts requests on port 3000.

And here are the open ports on the EC2 instance [screenshot]: as you can see, port 3000 is not listed.


Here is the task definition with port mappings that deployed the container (to AWS ECS) shown in the docker ps screenshot above [screenshot]. In the task definition, you can see the port mappings I have defined for the container.


Here is the task running on my EC2 instance with the task definition shown above; the network mode I am using is 'awsvpc' [screenshot].


Here's the "Networking" tab of the ENI associated with the task, and also the inbound rule of the security group attached to the EC2 instance the task runs on, which accepts requests on port 3000 from all IPs [screenshot].

EDIT 1:

After I did

docker run -p 3000:3000 <my-image:my-tag>  

inside the EC2 machine (after SSHing in from my laptop), I could send API requests to the container's public IP and receive proper responses. This means that ports are mapped only when I run the container manually.

I had no problems with ports when I used Fargate, whether I updated the service from Circle CI or started tasks manually.

So, how do I get ports mapped automatically when a task is started from the AWS ECS service dashboard or from Circle CI? If I run the Docker container manually, I won't get logs in AWS CloudWatch automatically and won't be able to stop it from the ECS dashboard. Normally another container run by AWS on the EC2 instance takes care of those things: it routes logs to CloudWatch and handles stop/start commands to launch a new container from a new image in AWS ECR, without me having to SSH in every time I want to look at logs or start/stop containers.

What has gone wrong here that led to ports not being mapped, and how do I fix it so that I can send API requests to my container?
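For reference, the portMappings section of my task definition looks roughly like this (container name and image are placeholders, not my exact values):

"containerDefinitions": [
  {
    "name": "ml-model",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/my-image:my-tag",
    "portMappings": [
      { "containerPort": 3000, "hostPort": 3000, "protocol": "tcp" }
    ]
  }
]

Note that in awsvpc network mode the task gets its own ENI and Docker does not publish ports on the host, so an empty PORTS column in docker ps may be expected; the port would then be reachable on the ENI's IP rather than the instance's.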

puppetdb 6.3.3 can't connect to postgresql-11

Posted: 04 Jan 2022 07:07 AM PST

semanage confirms my host is running in permissive mode.

I can log in as the postgres system user, but connecting to the puppetdb database as user puppetdb without a password fails like this:

[msk@puppet ~]$ su - postgres
Password:
Last login: Fri Jun 21 14:19:01 EDT 2019 on pts/1
bash-4.2$ psql -d puppetdb -U puppetdb
psql: FATAL: Peer authentication failed for user "puppetdb"

netstat -tlpn |grep postmaster shows
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 22948/postmaster

The error I see in /var/log/puppetlabs/puppetdb/puppetdb.log by the hundred is:

Pool - Connection is not available, request timed out after 3012ms.
2019-06-21T13:36:50.267-04:00 ERROR [p.p.c.services] Will retry database connection after temporary failure: java.sql.SQLTransientConnectionException: PDBMigrationsPool - Connection is not available, request timed out after 3000ms.

/var/lib/pgsql/11/data/pg_hba.conf contains:

local   all             all                                     peer
host    all             all             127.0.0.1/32            ident
host    puppetdb        puppetdb        127.0.0.1/32            peer
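For comparison, a pg_hba.conf entry that would let puppetdb authenticate over TCP with a password (md5) instead of peer/ident would look roughly like this; whether password auth is the right choice here is an assumption on my part:

# TCP connections from localhost for the puppetdb user, password (md5) auth
host    puppetdb        puppetdb        127.0.0.1/32            md5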

postgresql-Fri.log is full of

FATAL: remaining connection slots are reserved for non-replication superuser connections

Thanks for any clues.

Is there any difference between Domain controller and Active directory?

Posted: 04 Jan 2022 05:27 AM PST

If I were asked to define a domain controller, I would say a DC is a server where Active Directory is installed, or:

Active Directory simply means secure, centralized authentication and management, and domain controller = AD DS + DNS.

But I get confused when I read here that:

I also think it is VERY EASY to say DOMAIN CONTROLLER == ACTIVE DIRECTORY, which isn't quite the case.

I want to know: is that correct or wrong? If wrong, then what is the difference?

nginx check if filename with different extension exists

Posted: 04 Jan 2022 07:07 AM PST

If a file with ".html" extension doesn't exist I need to know if the same file exists with ".th.html" extension and make a redirect.

Right now on 404 I'm doing a rewrite and if $request_filename exists I do the redirect.

try_files $uri $uri/ @thengine;

error_page 404 = @thengine;

location @thengine {
    rewrite ^/(.*)\.(htm|html)$ /$1.th.html;

    if (-f $request_filename) {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_intercept_errors on;
        proxy_redirect off;

        proxy_pass http://thengine_backend;
    }
}

I'm wondering if there is a better way to do that without rewrite.

Maybe something like

if ($request_filename ~ (some rule to replace extension)){...}  

Thank you.

Edit: All requests from browser will come with .html, but in case the file with .html doesn't exist, I have to check if the same file exists with .th.html and do redirect only on this case.

Edit2: Let's say someone access domain-nginx.com/path/to/index.html

  • nginx must check if file exist, and if it does, show the page
  • if file doesn't exist, look for index.th.html
  • if index.th.html doesn't exist give directly 404
  • if index.th.html DOES exist set some headers and serve domain-app.com/path/to/index.th.html (here is an application that will process these kind of templates)

All this time the user must see only domain-nginx.com/path/to/index.html and not see any redirect or the url to change.

Notice that .th.html is handled by another application
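To make the flow above concrete, here is an untested sketch of what I have in mind with try_files instead of error_page (the backend name and directive order are assumptions from my current config):

location ~ \.html$ {
    # serve the .html file if it exists, otherwise fall back
    try_files $uri @thengine;
}

location @thengine {
    # internal rewrite to the .th.html variant; the client URL does not change
    rewrite ^/(.*)\.(htm|html)$ /$1.th.html;

    # proxy only if the rewritten file actually exists on disk;
    # otherwise the request falls through to a normal 404
    if (-f $request_filename) {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://thengine_backend;
    }
}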

web server behind NAT cannot be accessed by the same network using NAT router IP or domain

Posted: 04 Jan 2022 05:07 AM PST

I have one host server acting as a NAT server; its public domain name example.com is tied to its public IP address PUB_IP_ADD.

I have another web server behind the NAT with IP address 192.168.1.100, and this port forwarding rule is set on the host server:

-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.100:80

I have some other servers behind the NAT with fixed IP addresses in the range 192.168.1.101-110, and the masquerade rule covers the whole 192.168.1.0/24 range:

-A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE

The above rules let my servers behind the NAT access the internet (download and ping public IPs).

My web page can be accessed from the internet by visiting example.com, but it cannot be accessed from inside the NAT network (192.168.1.0/24) using the same domain name or the host server's IP address.

I wonder: why can't the web server behind the NAT firewall be reached by its peers via the NAT server's domain name or IP?

Do I need to add SNAT rules specifically to the web server and remove the masquerade line?
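For reference, the kind of hairpin-NAT rules I am considering, in the same iptables-save style as above (the gateway address 192.168.1.1 is an assumption; adjust to the actual internal address of the NAT host):

# DNAT internal clients that hit the public IP, the same way external clients are handled
-A PREROUTING -s 192.168.1.0/24 -d PUB_IP_ADD -p tcp --dport 80 -j DNAT --to-destination 192.168.1.100:80
# SNAT so the web server's replies return via the NAT host instead of going directly to the client
-A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.100 -p tcp --dport 80 -j SNAT --to-source 192.168.1.1

Without the SNAT rule, the web server would reply directly to the internal client, which would then drop the response because it expected it from the NAT server's address.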

monit send email does not work

Posted: 04 Jan 2022 04:49 AM PST

I am trying to use monit, and set up email server using gmail. The configuration file is like this:

set mailserver smtp.gmail.com port 587
    username "someuser@gmail.com" password "password"
    using tlsv1
    with timeout 30 seconds

And I set an alert to test:

check file alerttest with path /.nonexistent
    alert address@gmail.com with reminder on 500 cycles

But when I use monit validate, the error message I got is this:

Sendmail: error receiving data from the mailserver 'smtp.gmail.com' -- Resource temporarily unavailable
Alert handler failed, retry scheduled for next cycle
'alerttest' file doesn't exist
Sendmail: error receiving data from the mailserver 'smtp.gmail.com' -- Resource temporarily unavailable
'alerttest' trying to restart

Does anyone have any ideas? Thanks a lot.

IIS 7 and ASP.NET State Service Configuration

Posted: 04 Jan 2022 05:07 AM PST

We have 2 web servers load balanced and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions but we don't have the resources for that.

I've followed these instructions to no avail. The session still isn't being shared between the two servers.

I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue?

Update:

I turned on the session state service on my local machine and pointed both servers at my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe the problem might be that I'm not using a standalone server for the state service. Perhaps it's because I am using the IP address 127.0.0.1 on one server and a different IP address on the other. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't work from the host server itself. Any insight on whether my suspicions are correct would be appreciated.
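For reference, the web.config section I am using to point both servers at the state service looks roughly like this (the host IP and timeout shown are placeholders):

<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=192.168.0.10:42424"
                timeout="20" />
</system.web>

One thing I have read is that the ASP.NET state service refuses remote connections by default, and that the AllowRemoteConnection value under HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters must be set to 1 before a second server can connect; I have not confirmed whether that applies to my setup.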
