Saturday, September 18, 2021

Recent Questions - Server Fault


Nginx does not open specified listen port

Posted: 18 Sep 2021 08:35 PM PDT

I am trying to proxy SSH traffic from nginx (listening on port 7999) to a Bitbucket server (listening on port 7998) on the back end. Both nginx and Bitbucket are running inside Docker containers. If I log in to the nginx container and do telnet bitbucket.domain-name.com 7998, it connects. On the host machine, if I do netstat -pnlt I get:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::2377                 :::*                    LISTEN      24477/dockerd
tcp6       0      0 :::7946                 :::*                    LISTEN      24477/dockerd
tcp6       0      0 :::80                   :::*                    LISTEN      24477/dockerd
tcp6       0      0 :::443                  :::*                    LISTEN      24477/dockerd
tcp6       0      0 :::7999                 :::*                    LISTEN      24477/dockerd

But when I do this on my computer: git clone ssh://git@domain-name.com:7999/project_key/repo_name.git I get:

Cloning into 'repo_name'...
ssh: connect to host domain-name.com port 7999: Connection refused
fatal: Could not read from remote repository.

And when I do telnet domain-name.com 7999 I get telnet: Unable to connect to remote host: Connection refused.

It seems the problem is nginx is not listening on port 7999 inside the docker container. But, on the host I can see dockerd is listening on port 7999. I imagine I might not have the nginx config correct, but am not sure. Here are the relevant bits from the config files.

docker-compose.yaml (nginx)

services:
    nginx:
        ports:
            - "80:8080"
            - "443:8443"
            - "7999:7997"

nginx.conf (inside the nginx container)

stream {
    server {
        listen 7997;
        proxy_pass bitbucket1.cybertron.ninja:7998;
    }
}

And here's some output executed inside the nginx container:

root@6d123f454eef:/# netstat -pnlt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      10/nginx: master pr
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      10/nginx: master pr
tcp        0      0 127.0.0.11:44703        0.0.0.0:*               LISTEN      -

Any ideas how to fix this issue? I'm stumped.
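
A note on where that stream block has to live: the netstat output above shows nginx listening on 8080 and 8443 but not on 7997, which is consistent with the stream block never being loaded. nginx only honours stream {} at the top level of nginx.conf, outside http {} (in the official Docker image, conf.d files are included inside http {}). A minimal sketch of a check plus a top-level include; the include path is an assumption, not something from the question:

# inside the container: dump the config nginx actually loaded
nginx -T | grep -A4 "stream"

# /etc/nginx/nginx.conf -- a top-level include, next to (not inside) the http {} block
include /etc/nginx/stream.d/*.conf;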

How to reverse NS?

Posted: 18 Sep 2021 07:15 PM PDT

I want to learn how we can reverse an NS. I'm not talking about reverse DNS. Is there a Linux command that gets the domains connected to a name server?

Let's say that we have the name server ns1.example.com. I want to list the domains that use ns1.example.com.
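
For what it's worth, the forward direction is easy to check from a shell; a small sketch assuming dig (from dnsutils/bind-utils) is available:

# list the name servers a known domain uses
dig +short NS example.com

Going the other way, from a name server to every domain that delegates to it, is not something the DNS protocol itself can answer; it generally requires zone-file access or a passive-DNS dataset.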

How do you find where the rsync process is coming from?

Posted: 18 Sep 2021 06:29 PM PDT

How do you find where the rsync process is coming from? We have an rsync process, but I am not sure what initiates it. I looked at the cron jobs running for our WordPress project and looked at the code, and I don't really see anything that might be running it, but I know it's there, because I ran iotop and saw it. What are some helpful commands that would allow me to find it?
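
A few generic diagnostics that may help pin down the parent of the process; a sketch assuming auditd is installed (the PID placeholder comes from iotop):

# show the process tree so the rsync's parent (cron, sshd, a PHP worker, ...) is visible
ps -ef --forest | grep -B2 -A1 rsync

# for a specific PID, show its parent and start time explicitly
ps -o pid,ppid,user,lstart,cmd -p <pid-from-iotop>

# record every future execution of rsync with auditd, then query the log later
sudo auditctl -w /usr/bin/rsync -p x -k rsync-watch
sudo ausearch -k rsync-watch --interpret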

Disable nginx cache for a specific URL in site

Posted: 18 Sep 2021 05:01 PM PDT

We want to disable cache on a specific URL in our site.

The problem we have is that when a user buys something, the purchase is only reflected in the user's profile after the nginx cache is cleared.

User's profile URL looks like this: https://example.com/api/user/content/49642
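
A minimal sketch of the kind of location block that can exempt that path from the cache, assuming proxy_cache is what is caching the responses (if it is fastcgi_cache, the analogous fastcgi_* directives apply); the location prefix is taken from the URL above and the upstream name is a placeholder:

location /api/user/content/ {
    proxy_pass         http://backend;   # placeholder upstream
    proxy_cache_bypass 1;                # always fetch from the backend
    proxy_no_cache     1;                # never store the response
}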

How to detect my Jenkinsfile?

Posted: 18 Sep 2021 04:40 PM PDT

  1. I'm new to Jenkins
  2. I have a git repository in my windows 10 pc: C:\Users\my_name\my_product\jenkinsfiles
  3. I created a Jenkinsfile inside it named test (without an extension); its content is as follows:
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
  4. I have Docker Desktop running a Jenkins container, and I shared the above path with this container in Jenkins's docker-compose file:
volumes:
  - ~/my_product/jenkinsfiles:/var/jenkins_home/jenkinsfiles
  5. I didn't want to mess with GitHub settings and such, so I went to my Jenkins web page at localhost:8080 and created a new job of type: pipeline. Then I selected Pipeline script from SCM; SCM: None; and set Script path: /Jenkinsfiles/test
  6. Running this pipeline produces an error:
Started by user admin
Lightweight checkout support not available, falling back to full checkout.
Checking out hudson.scm.NullSCM into /var/jenkins_home/workspace/product/product_local/my_lab@script to read /Jenkinsfiles/test
java.io.IOException: /Jenkinsfiles/test is not inside /var/jenkins_home/workspace/product/product_local/my_lab@script
    at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:178)
    at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:68)
    at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:309)
    at hudson.model.ResourceController.execute(ResourceController.java:100)
    at hudson.model.Executor.run(Executor.java:433)
Finished: FAILURE

How do I run this Jenkins file successfully, connecting the file to a new pipeline?
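
One possible direction, offered as a sketch rather than a confirmed fix: "Pipeline script from SCM" needs an actual SCM to check out, and with SCM set to None the script path is resolved inside the (empty) job workspace, which is what the IOException above complains about. One way to keep everything on disk is to turn the mounted folder into a local Git repository and point the job at it with a file:// URL:

# inside the Jenkins container (the shared folder from the compose file)
cd /var/jenkins_home/jenkinsfiles
git init
git add test
git commit -m "add test pipeline"

Then, in the job configuration: Definition: Pipeline script from SCM; SCM: Git; Repository URL: file:///var/jenkins_home/jenkinsfiles; Script Path: test (relative to the repository root, not an absolute path).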

Home Email Server Configuration

Posted: 18 Sep 2021 03:48 PM PDT

Can a NUC computer with the Ubuntu operating system and a control panel host websites, run computer programs, and surf the Internet, all from one computer?

EC2 instance with multiple EIP

Posted: 18 Sep 2021 03:21 PM PDT

I have an EC2 host and I wish to have two public IPs available to access the host: one IP for HTTPS (443) access, and the other IP for SSH (22) access. Assume that I must have two public IPs for the sake of this question.

I found this related SF question, which is now out of date. I have already created two ENIs and assigned a public IP to each. My security groups allow HTTPS and SSH access as described above. While I can access SSH (on the first ENI), I cannot reach the HTTPS server on the second.

I suspect a routing issue, since both EIPs can reach the internet (and my SSH and HTTPS connections can come from any public address): if either EIP is the default route, I won't be able to reach the other. What is the solution here? Would attaching two EIPs to a single ENI solve the problem (if that is even possible)?
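
For reference, the usual remedy for a second ENI is source-based policy routing, so replies leave through the interface they arrived on. A minimal sketch, assuming the second ENI is eth1 with the private address 10.0.1.20/24 and gateway 10.0.1.1 (all names and addresses are placeholders):

# create a dedicated routing table for the second interface
echo "200 eni2" | sudo tee -a /etc/iproute2/rt_tables

# default route for that table goes out eth1
sudo ip route add default via 10.0.1.1 dev eth1 table eni2

# traffic sourced from eth1's address uses that table
sudo ip rule add from 10.0.1.20/32 table eni2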

postfix MTA and SSL

Posted: 18 Sep 2021 03:53 PM PDT

We have some services sending information to email receivers. The services use SMTP to send the mail to postfix and then postfix delivers it to the correct domains (gmail.com, hotmail.com etc).

When the mail arrives in a Gmail inbox it is marked with a security icon (screenshot not included here).

I'm trying to understand how encryption for email works. If we add a certificate to postfix, will it create end-to-end encryption? If our service sends an email to @gmail.com, what will happen?

  1. The message will be encrypted from our service all the way to gmail.com.
  2. Or will the email be encrypted between our service and postfix, decrypted (on postfix), and then encrypted between postfix and Gmail, if gmail.com offers it?

If it's option 2, what would the benefit be in this case of using TLS between the services and postfix?

Since it's only our internal services sending email, there are no passwords etc. sent in clear text between our services and postfix.

Can a wildcard website certificate be used for postfix (same domain name as the postfix configuration)?
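
For context, SMTP TLS as postfix implements it is hop-by-hop (transport) encryption rather than end-to-end. A minimal sketch of the main.cf parameters involved, with placeholder certificate paths:

# inbound: offer STARTTLS to clients that submit mail to this postfix instance
smtpd_tls_cert_file = /etc/ssl/certs/mail.example.com.pem
smtpd_tls_key_file = /etc/ssl/private/mail.example.com.key
smtpd_tls_security_level = may

# outbound: use TLS opportunistically when the receiving MX (e.g. gmail.com) offers it
smtp_tls_security_level = may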

Ansible ec2_tag failing due to missing boto dependency, which is installed

Posted: 18 Sep 2021 02:55 PM PDT

I am trying to set the tag on a resource on my EC2 machine, per below:

- hosts: machinesA
  tasks:
  - name: Adding tags
    ec2_tag:
      resource: {{ imageid }}
      region: {{ region }}
      state: present
      tags:
        Name: "My image"

And this fails with an error that botocore and boto3 are required. Based on the IP in the error message, they are required on the target machine. However, I confirmed that both the source (control node) and the target machine have boto, botocore, and boto3 installed. (My script did that earlier, and I even SSH'd to the target and confirmed they are installed.)

Earlier in the script I saw a warning about an available pip upgrade, but on this OS (CentOS 7) that leads to broken dependencies, so I just leave pip as is. Hopefully that is not the cause.

Is this a known issue, or is there a simple workaround?
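
Two things that may be worth checking, offered as a sketch rather than a diagnosis: the ec2_tag module only talks to the AWS API, so it can be delegated to the control node, and Ansible has to run it with the Python interpreter that actually has boto3 (the interpreter path below is an assumption). The bare Jinja expressions also need quoting:

- name: Adding tags
  delegate_to: localhost                            # make the API call from the control node
  vars:
    ansible_python_interpreter: /usr/bin/python3    # interpreter that has boto3 installed
  ec2_tag:
    resource: "{{ imageid }}"
    region: "{{ region }}"
    state: present
    tags:
      Name: "My image"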

Nginx: Running multiple web apps on same server using subdomains

Posted: 18 Sep 2021 01:13 PM PDT

I have an Ubuntu 20.04.1 LTS server and I am running nginx/1.18.0 (Ubuntu).

I basically have three config files in my folder /etc/nginx/sites-available, as I would like to route requests to:

  1. myserver.com
  2. immos.myserver.com
  3. items.myserver.com

My myserver.com config file looks like the following:

server {
    server_name myserver.com www.myserver.com;
    root /var/www/main-application/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/myserver.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myserver.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = www.myserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = myserver.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name myserver.com www.myserver.com nlg.myserver.com;
    return 404; # managed by Certbot
}

The nginx-config of my immos.myserver.com looks like the following:

server {
    listen 80;
    server_name immos.myserver.com;
    root /var/www/immos-application/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

My nginx config of items.myserver.com looks like the following:

server {
    listen 80;
    server_name items.myserver.com;
    root /var/www/items_application/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

All the subdomains and the domain are routed in DNS to my server's IP.

I can open myserver.com and get routed to the correct page.

BUT when opening immos.myserver.com or items.myserver.com, I get routed to the application that is running on myserver.com.

All three applications are laravel applications.

Any suggestions what I am doing wrong?
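
A couple of generic checks that may help narrow this down (a sketch, not a diagnosis): files in sites-available only take effect once they are linked into sites-enabled and nginx is reloaded, and since the two subdomain blocks only listen on port 80, an https:// request for them will fall through to the block that does listen on 443 (the myserver.com one):

# confirm the two subdomain vhosts are actually enabled and loaded
ls -l /etc/nginx/sites-enabled/
sudo nginx -T | grep server_name

# validate and reload after linking anything that was missing
sudo nginx -t && sudo systemctl reload nginx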

Is there a difference in performance or capability in these sas cables?

Posted: 18 Sep 2021 12:44 PM PDT

I'm looking at SAS SFF-8644 to SFF-8088 cables and noticed that some are single cables whereas others are dual cables. What's the difference between these, and is one better than the other when connecting an HBA to an array?

dual cable

single cable

Migrate a QEMU/KVM VM from qemu:///system to qemu:///session

Posted: 18 Sep 2021 01:50 PM PDT

I have created a Windows 10 VM using virt-manager as a regular user (not root).

However, when I try to list the VMs with virsh list --all, my VM is not listed. If I specify the system URI by running virsh -c qemu:///system list --all, I do see my VM listed.
I would like to migrate my VM from qemu:///system to qemu:///session to be able to list it with virsh list --all.

  • How can I achieve that?
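
A minimal sketch of one way to do this, assuming the domain is called win10 and the disk paths below are placeholders: export the XML from the system instance, move the disk image somewhere the unprivileged user can read, fix the path in the XML, and define the domain again in the session instance.

# export the definition from qemu:///system and remove it there
virsh -c qemu:///system dumpxml win10 > win10.xml
virsh -c qemu:///system undefine win10

# copy the disk image to a user-readable location and update <source file=...> in win10.xml
sudo cp /var/lib/libvirt/images/win10.qcow2 ~/vm-images/
sudo chown $USER: ~/vm-images/win10.qcow2

# import the domain into qemu:///session
virsh -c qemu:///session define win10.xml
virsh -c qemu:///session list --all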

How to redirect two domains to same local server IP with pfSense

Posted: 18 Sep 2021 01:06 PM PDT

I am planning to set up the firewall in front of my webserver in the cloud, which hosts 3 websites. All three websites are proxied by Cloudflare. So my question: is it possible to map the public IP of pfSense in Cloudflare, and have pfSense in turn forward the HTTP requests to the webserver for each website accordingly?

  1. Cloudflare --> pfSense public IP --> site1.com (connected to pfSense through a private IP)
  2. Cloudflare --> pfSense public IP --> site2.com (connected to pfSense through a private IP)
  3. Cloudflare --> pfSense public IP --> site3.com (connected to pfSense through a private IP)

If it is possible, please provide me the steps to achieve this. Thanks in advance.

PFSense domain forwarding

problems mounting curlftpfs

Posted: 18 Sep 2021 03:00 PM PDT

I am attempting to mount an FTPS connection but am not having much success in getting it to automatically mount. I am using AWS Linux. I can get it working from the command line with:

curlftpfs <ipaddress>:/incoming /home/<username>/autohcidev/ -o ssl,no_verify_peer,allow_other,debug  

The credentials are specified in /root/.netrc. That connection seems to work fine:

FUSE library version: 2.9.4
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.26
flags=0x001ffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40

so with some confidence I add this into /etc/fstab :

curlftpfs#<ipaddress>:/incoming /home/<username>/autohcidev/ fuse ssl,no_verify_peer,allow_other,uid=512,gid=512,umask=0002 0 0  

and then I enter

mount -a  

and I get:

mount: wrong fs type, bad option, bad superblock on curlftpfs#<ipaddress>:/incoming,
   missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

dmesg | tail gives the following:

[    2.281634] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[    2.343044] ACPI: Power Button [PWRF]
[    2.345804] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
[    2.411051] ACPI: Sleep Button [SLPF]
[    2.491384] mousedev: PS/2 mouse device common for all mice
[    3.525191] EXT4-fs (xvda1): re-mounted. Opts: (null)
[    3.550044] fuse init (API version 7.26)
[    3.796345] NET: Registered protocol family 10
[    3.803184] Segment Routing with IPv6
[    6.212849] random: crng init done

The same thing before and after mount -a

That user ID and group ID are valid on the local server. I also tried a user ID and group ID that are valid on the remote server. Some googling suggested that I need to install some sort of helper program. I installed cifs-utils as was suggested at one point, but that felt like a long shot and indeed it did not seem to help.

sudo yum install nfs-common  

returns the following on AWS Linux:

Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main                                                | 2.1 kB     00:00
amzn-updates                                             | 2.5 kB     00:00
No package nfs-common available.
Error: Nothing to do

So at this point I'm thinking that I need to find something equivalent for AWS linux, but I seem to only be able to find documentation about EFS. Any insight would be appreciated.
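
One thing that may be worth checking, offered as an assumption rather than a confirmed fix: the "helper program" mount complains about for an fstab entry of this form is usually the FUSE mount helper, which ships with the fuse package rather than with NFS or CIFS utilities:

# is the helper that handles "program#resource ... fuse ..." fstab lines present?
ls -l /sbin/mount.fuse

# on Amazon Linux the helper comes with the fuse package (package name is an assumption)
sudo yum install fuse fuse-libs
sudo mount -a -v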

No space left on device: Error writing to logs/access_log

Posted: 18 Sep 2021 02:00 PM PDT

Getting the following error:

[Wed Aug 16 00:31:23 2017] [warn] [client 128.250.0.204] (28)No space left on device:
Error writing to logs/access_log, referer: https://...

The access log file seems to be filling up and not archiving. I have copied the old file and created a new one but the same thing seems to be happening.

-rw-r--r-- 1 root root  38366046 Aug 17 09:19 access_log
-rw-r--r-- 1 root root 145557729 Aug 16 11:25 access_log.1

Plenty of space on the drive etc (see below)

Any advice much appreciated.

Danny.


$ sudo fdisk -l

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

  #         Start          End    Size  Type            Name
  1         4096     20971486     10G  Linux filesyste Linux
128         2048         4095      1M  BIOS boot parti BIOS Boot Partition

$ df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.8G  6.2G  3.5G  65% /
devtmpfs        2.0G   60K  2.0G   1% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
/dev/xvdf        40G   11G   27G  29% /mnt/hd0

$ df -i

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/xvda1      655360 252250  403110   39% /
devtmpfs        503768    447  503321    1% /dev
tmpfs           506002      1  506001    1% /dev/shm
/dev/xvdf      2621440 156532 2464908    6% /mnt/hd0

Services running —

httpd (pid  19681) is running...
ip6tables: Firewall is not running.
iptables: Firewall is not running.
irqbalance (pid  2233) is running...
lvmetad (pid  1928) is running...
lvmpolld (pid  1937) is running...
dmeventd is stopped
mdmonitor is stopped
messagebus (pid  2259) is running...
mysqld (pid  2660) is running...
netconsole module not loaded
Configured devices:
lo eth0
Currently active devices:
lo eth0
ntpd (pid  2416) is running...
Process accounting is enabled.
rdisc is stopped
rngd (pid  2242) is running...
rsyslogd (pid  2219) is running...
saslauthd is stopped
sendmail (pid  2709) is running...
sm-client (pid  2718) is running...
openssh-daemon (pid  2908) is running...
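
A few generic checks that can explain a "No space left on device" error even when df shows free blocks and free inodes, offered as a sketch (the log path is a placeholder; check ServerRoot/ErrorLog for the real one):

# which filesystem do the Apache logs actually live on?
df -h /etc/httpd/logs

# deleted-but-still-open log files keep consuming space until httpd reopens them
sudo lsof +L1 | grep -i log

# after swapping log files by hand, tell Apache to reopen its logs
sudo apachectl graceful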

Avoiding unnecessary bounces with OpenSMTPD on OpenBSD

Posted: 18 Sep 2021 08:59 PM PDT

I am running OpenSMTPD on OpenBSD together with spamd, spampd and spamassassin, DKIMproxy and dovecot. My setup is to handle both local e-mail on the server and (external) email for my domain. My setup seems to be working (still in testing phase). I am happy to be able to realise my setup with an opensmtpd.conf file of 17 lines excluding comments and spaces. There are however a few things that I am not happy with. I hope someone can suggest how to address these:

While building the setup I initially had no spampd / spamassassin. In that config there was exactly one 'accept' command picking up the email and delivering it to dovecot. The OpenSMTPD server checks that the recipient address exists and, if it does not, returns error 550 and does not allow submission of the e-mail. This is good.

After I incorporated spampd and spamassassin, the 'accept' command picking up the incoming e-mail forwards to spampd (which runs spamassassin). After spampd / spamassassin processing, the message is picked up by another OpenSMTPD accept command that delivers it to dovecot. Though this works, there are some unwanted side effects that, if not fixed, would lead to vulnerabilities:

1) spampd / spamassassin will process all incoming messages for my domain, including those for recipients on that domain that do not exist. Spampd / spamassassin are not exactly 'light' tasks. Together this increases the opportunities for a DoS attack.

2) All incoming messages for my domain are first accepted. In the case of unknown recipients this is only detected after spampd / spamassassin processing. Once the unknown recipient is detected, a delivery status e-mail is sent by the mailer daemon to the sender stating that the recipient is unknown. That allows an attacker to use my server to send spam-like email to any valid recipient, by sending an e-mail to my server with any valid e-mail address as the sender and any invalid recipient on my domain as the recipient.

Questions:

  • Is there any way to configure OpenSMTPD such that it rejects unknown recipients immediately (i.e. as part of the initial submission to OpenSMTPD) even when spampd / spamassassin are incorporated?
  • Is there any way in which I can make the server NOT send out reject messages for non-existent recipients?

Kind Regards,

ip_conntrack_max not found

Posted: 18 Sep 2021 04:00 PM PDT

I reconfigured /etc/sysctl.conf:

net.ipv4.netfilter.ip_conntrack_max = 65536
net.nf_conntrack_max = 65536

net.netfilter.nf_conntrack_tcp_timeout_established = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 600

net.netfilter.nf_conntrack_tcp_timeout_time_wait = 90
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 90

After sysctl -p, I received:

sysctl: cannot stat /proc/sys/net/ipv4/netfilter/ip_conntrack_max: No such file or directory

net.nf_conntrack_max = 65536

sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established: No such file or directory

sysctl: cannot stat /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established: No such file or directory

sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_time_wait: No such file or directory

sysctl: cannot stat /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_time_wait: No such file or directory

I tried to load the ip_conntrack module (sudo modprobe ip_conntrack), but it does not appear to be loaded: there is no error, but lsmod | grep ip_conntrack gives no output. I am running Debian Jessie and I installed conntrackd 1.4.2.
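
For context, on recent kernels the module is called nf_conntrack (ip_conntrack is only a legacy name), and the sysctl keys under /proc/sys/net/netfilter/ only exist once that module is loaded. A minimal sketch, using the usual Debian conventions:

# load the conntrack module and confirm it is present
sudo modprobe nf_conntrack
lsmod | grep nf_conntrack

# the nf_conntrack sysctl keys should now exist
sudo sysctl net.netfilter.nf_conntrack_max=65536

# make the module load at boot so sysctl -p finds the keys early
echo nf_conntrack | sudo tee /etc/modules-load.d/nf_conntrack.conf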

Bitlocker Network Unlock certificate issue

Posted: 18 Sep 2021 07:01 PM PDT

I'm trying to enable Bitlocker Network Unlock feature. I followed this article: https://technet.microsoft.com/en-us/library/jj574173(v=ws.11).aspx

My environment is:

  • Domain Functional Level: 2012
  • Forest Functional Level: 2008 R2
  • all Domain Controllers are running Windows 2012 R2
  • WDS & Network Unlock feature running on Windows Server 2016 (WDS running flawlessly)

Following the article I created a certificate template by copying the "User" template on my CA. The template is published so it can be requested. Then, on my WDS server, I open the certificates console as a user and request a new certificate. The certificate request appears as pending on the CA, which I approve manually. The issued certificate never shows up in the "Personal" store on the WDS server, even though on the CA it appears as issued. I feel this article may be wrong, because the "Bitlocker Network Unlock" cert store only appears in the certificates console run as Local Computer, not as the user. But the current cert template doesn't allow requests from computer accounts. What should I do?

How do I secure the access token, on Linux, to remote, automated secrets stores like Hashicorp Vault?

Posted: 18 Sep 2021 02:00 PM PDT

There seems to be a bit of a "chicken and egg" problem with the passwords to the password managers like Hashicorp Vault for Linux.

While researching this for some Linux servers, someone clever asked, "If we're storing all of our secrets in a secrets storage service, where do we store the access secret to that secrets storage service? In our secrets storage service?"

I was taken aback, since there's no point to using a separate secrets storage service if all the Linux servers I'd store the secrets on anyway have its access token.

For example, if I move my secrets to Vault, don't I still need to store the secrets to access Hashicorp Vault somewhere on the Linux server?

There is talk about solving this in some creative ways, and at least making things better than they are now. We can do clever things like auth based on CIDR or password mashups. But there is still a security trade-off: for example, if a hacker gains access to my machine, they can get to Vault if the access is based on CIDR.

This question may not have an answer, in which case, the answer is "No, this has no commonly accepted silver bullet solution, go get creative, find your tradeoffs bla bla bla"

I want an answer to the following specific question:

Is there a commonly accepted way that one secures the password to a remote, automated secrets store like Hashicorp Vault on modern Linux servers?

Obviously, plaintext is out of the question.

Is there a canonical answer to this? Am I even asking this in the right place? I considered security.stackexchange.com, too, but this seemed specific to a way of storing secrets for Linux servers. I'm aware that this may seem too general, or opinion based, so I welcome any edit suggestions you might have to avoid that.

We laugh, but the answer I get on here may very well be "in vault". :/ For instance, a Jenkins server or something else has a 6-month revokable password that it uses to generate one-time-use tokens, which they then get to use to get their own little ephemeral (session limited) password generated from Vault, which gets them a segment of info.

Something like this seems to be along the same vein, although it'd only be part of the solution: Managing service passwords with Puppet
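
For what it's worth, one concrete pattern along the lines sketched above is Vault's AppRole auth method: the machine holds a long-lived but low-value role_id, receives a short-lived secret_id through a separate channel (CI, config management, orchestration), and trades the pair for a short-lived token. A minimal sketch using the Vault CLI; the environment variables and the secret path are placeholders:

# exchange role_id + secret_id for a short-lived client token
vault write auth/approle/login \
    role_id="$ROLE_ID" \
    secret_id="$SECRET_ID"

# the returned token is then used for the actual secret reads, e.g.
VAULT_TOKEN="$CLIENT_TOKEN" vault kv get secret/myapp/db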

AWS vmimport - stuck on booting phase

Posted: 18 Sep 2021 09:07 PM PDT

Currently importing an OVA from an S3 bucket. Windows 2008 R2 Standard

Process stops at the booting phase

"StatusMessage": "FirstBootFailure: This import request failed because the instance failed to boot and establish network connectivity.",

This is a single volume machine that boots up fine if the OVA is reimported back to VMware.

There is a logon disclaimer box configured to appear before choosing the account to logon to.

I've followed the AWS VM Import prerequisites: it is not domain joined, AV is disabled, and Windows Updates are set to manual.

A similar OVA has imported fine, so struggling to understand what is different about this one.

Anyone able to offer a view on what might be the issue?

How do I set Host Groups on Fail2ban for Wordpress?

Posted: 18 Sep 2021 03:48 PM PDT

I'm trying to set up a custom filter for fail2ban on a wordpress site. I've been following this tutorial but when I try to test my custom filter, I get the error: server.failregex.RegexException: No 'host' group in '/etc/...

I've been researching this problem and I see that filters are supposed to be wrapped in (?P<host> ... ) as per the documentation.

So my file looks like this:

# Fail2Ban filter for Wordpress
#
# WP brute force attacks filter

[Definition]

failregex = (?P<host> ^ .* "POST )
/wp-login.php

ignoreregex =

I've tried different permutations of placing the (?P<host> ... ) group around different parts of the regex, but after looking around I'm honestly not sure what the correct syntax is. Can someone explain the syntax to me so that I can get this up and running?

I'm not sure if these details matter, but my server is running Apache/PHP and sits behind Cloudflare.
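
For reference, a minimal sketch of a filter where the named group sits where the client IP appears in the log line; fail2ban also accepts the shorthand <HOST>, which expands to the (?P<host>...) group the error is asking for. The exact pattern depends on your Apache log format and on whether Cloudflare's IP or the real client IP is being logged, so treat this as an assumption:

[Definition]
# <HOST> is fail2ban shorthand for the named group (?P<host>...)
failregex = ^<HOST> .* "POST /wp-login\.php
ignoreregex =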

Thanks in advance.

How to get Mod_pagespeed to output compressed (gzip) css?

Posted: 18 Sep 2021 03:00 PM PDT

Ubuntu 14.04, Apache 2.4.7, PHP-FPM 5.5.9

Using latest stable Pagespeed Module for Apache (1.9.32.3-4448).

I'm in the process of optimizing a WordPress website for speed (bandwidth and rendering). mod_deflate is set up. A plugin (Better WordPress Minify) compresses and combines all CSS files into one (thus reducing the number of requests). With PageSpeed switched off, if I check the produced link (using FeedTheBot) it confirms that the content is compressed using gzip. But if PageSpeed is switched on, it shows gzip is not working. Using PageSpeed Insights (Chrome extension) confirms this.

Here is what I've tried:

  • Adding the following to /etc/apache2/mods-available/pagespeed.conf: ModPagespeedFetchWithGzip on and SetOutputFilter DEFLATE
  • Checking that mod_deflate is available and enabled (it appears in the list produced by apache2ctl -t -D DUMP_MODULES). The fact that the combined CSS file is compressed when PageSpeed is switched off is further proof.

Do you have an explanation?
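
One thing that may be relevant, offered as an assumption: PageSpeed serves its rewritten resources from its own cache, so compression for those responses still depends on mod_deflate matching their content types. A small sketch of type-based output compression:

<IfModule mod_deflate.c>
    # compress text assets, including the CSS/JS that mod_pagespeed rewrites and serves
    AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript
</IfModule>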

Kerberos constrained delegation using Citrix NetScaler

Posted: 18 Sep 2021 09:07 PM PDT

I'm currently evaluating Citrix NetScaler VPX (NS10.5 56.12.nc) as a potential replacement for Microsoft TMG server. Kerberos Constrained Delegation is at the top of my list of mandatory features.

Example: A web application is published via TMG. Members of a certain Active Directory group are not allowed access to this site. TMG has to request credentials from the client, check group membership and then pass those credentials to the web server hosting the application.

Unfortunately moving the membership check to the web server and allowing the client to authenticate directly is not an option.

I have tried several tutorials (e.g. http://support.citrix.com/article/CTX139133) to do this with NetScaler, but to no avail.

The authentication request the browser gets does come from the NetScaler, but all it returns is this:

<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8"><script type="text/javascript" src="/vpn/resources.js">
</script><script type="text/javascript" language="javascript">var Resources = new ResourceManager("/vpn/resources/{lang}", "VPN_ERRORS");</script>
</HEAD><BODY><CENTER><span id="You are not allowed to login."></span> <span id="Please contact your administrator."></span>
</CENTER><script type="text/javascript" language="javascript">Resources.Load();</script></BODY></HTML>

This looks "broken" to me. Whitespaces being used in tag IDs. Placeholder "{lang}" not being replaced with an actual value.

I've gone through the document's troubleshooting section (5.4). Every command returns as expected. Only the last one gives me an error:

nskrb kgetcred --delegation-credential-cache=/tmp/imper_cache --out-cache=/tmp/kcd_cache http/myserver.domain.com  

Returns:

kgetcred: krb5_parse_name http/myserver.domain.com: unable to find realm of host ns-t1

"ns-t1" is the hostname of the NetScaler server.

I really hope someone can help me with this.

Thanks in advance.

Regards, Kevin

Nginx error: "Primary script unknown" while reading response header from upstream

Posted: 18 Sep 2021 01:06 PM PDT

I have installed Nginx 1.6.2 with PHP-FPM (PHP 5.5.18) on a CentOS 6.6 server. I didn't touch anything except the /etc/nginx/conf.d/default.conf file, where I made some changes (see below):

server {
    listen       80;
    server_name  webvm devserver ;

    location / {
        root   /var/www/html;
        index  index.php index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files      $uri =404;
        root           /var/www/html;
        include        fastcgi_params;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    }
}

After restarting Nginx and trying to access the http://devserver/index.php file, I get this error:

2014/12/01 19:48:51 [error] 5014#0: *6 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.3.1, server: webvm, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "devserver"

I have checked also permissions/owner for /var/www/html with ls -l command and this is the output:

#ls -l /var/www/html/
total 4
-rw-r--r-- 1 root root 23 Dec  1 19:29 index.php

I did not touch anything under the PHP-FPM pool, so /etc/php-fpm.d/www.conf has the default configuration:

listen = 127.0.0.1:9000
user = apache
group = apache

Could permissions be the issue?

I have read several posts here (for example 1, 2, 3) about the same error and possible solutions and tried to apply them to my situation, but I can't get it to work, so I need some help: what am I doing wrong?

Note: I left out commented lines from the file shown above since they aren't relevant.
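
One detail that may be relevant, pointed out as an observation on the config shown above rather than a confirmed diagnosis: SCRIPT_FILENAME is still set to the stock /scripts$fastcgi_script_name placeholder, so PHP-FPM is asked for /scripts/index.php instead of /var/www/html/index.php, which is exactly the kind of path mismatch that produces "Primary script unknown". A minimal sketch of the PHP location using the document root instead:

location ~ \.php$ {
    root           /var/www/html;
    try_files      $uri =404;
    include        fastcgi_params;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    # resolve the script relative to the real document root
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
}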

Backup strategy for millions of files in lots of directories

Posted: 18 Sep 2021 05:08 PM PDT

We have millions of files in lots of directories, for example:

\00\00\00\00.txt
\00\00\00\01.pdf
\00\00\00\02.html
... so on
\05\55\12\31.txt

Backing these up to tape is slow, as backing up data in this format is much slower than backing up a single large file.

The total number of files on a disk and the relative size of each file impacts backup performance. Fastest backups occur when the disk contains fewer large size files. Slowest backups occur when the disk contains thousands of small files. Backup Exec Admin Guide.

Would the backup performance significantly increase by creating a virtual hard drive, hosting the data on it once mounted then backing up the vhd instead?

I'm unsure if the underlying data within the vhd would affect this.

What are the drawbacks to this method?

Exchange 2010 HELO header change

Posted: 18 Sep 2021 05:08 PM PDT

I couldn't find any appropriate step by step guide for changing HELO header values in Exchange 2010.

The problem is that the server doesn't allow changing the Default FQDN in: EMC -> Server configuration -> Hub transport -> Receive Connectors -> Default entry. The problem comes from it being the Default. I've read that I have to use PowerShell to change it. If someone knows the correct commands to change this in Exchange 2010, I'd be very thankful for the help.

Regards!

Nginx drops connections

Posted: 18 Sep 2021 08:08 PM PDT

I have a setup where I use the Linode NodeBalancer (load balancer) in front of my nginx/php5-fpm servers. This balancer does passive checks. These passive checks look at the status code of requests; if there are too many 5XX status codes, the node (VPS) is marked offline by the load balancer.

The NodeBalancer is putting my servers offline in a random way. When I contacted Linode support they came to the conclusion that there are no 500 errors, but connections are being dropped (or time out).

I can't find anything in my nginx logs. Is there any way to debug this problem and see which connections have been timed out or dropped by nginx?

EDIT: I can see a lot of 408 requests from the same IP / user agent. They come in in bulk. Is this suspicious? How would you handle this situation? Snapshot from access.log:

69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:29 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; 
SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:30 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:31 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:31 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 
5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:31 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  69.30.*.* - - [22/Apr/2014:19:28:31 +0200] "POST /error/register-image-error/ HTTP/1.1" 408 0 "http://www.mysite.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"  
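
For context, a 408 is logged when a client opens a connection but does not finish sending its request within nginx's timeout window, and a load balancer's passive check can count those against the node. A small sketch of the directives involved, plus one blunt way to shut out the offender (the CIDR below is only a placeholder for the block seen in the log):

# how long nginx waits for the request header / body before answering 408
client_header_timeout 10s;
client_body_timeout   10s;

# optionally refuse the misbehaving range outright
deny 69.30.0.0/16;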

Thanks!

Apache server not allowing mime type

Posted: 18 Sep 2021 04:00 PM PDT

How does one set mime types in Ubuntu 12.10 for .mp4 and .ogv video rendering? I want to run simple video files through localhost. Please give suggestions.

I have kept these 3 lines in /etc/apache2/httpd.conf file:

AddType video/ogg .ogv
AddType video/mp4 .mp4
AddType video/webm .webm

But when I open the index.html page from the localhost/Ubuntu/index.html path, it does not play the video. I have used HTML5 tags for the video. What could be the issue? I am using Ubuntu 12.10 and a LAMP server.
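
Two quick checks that may help, offered as a sketch (the video file name below is a placeholder): restart Apache so the AddType lines take effect, then look at the Content-Type header the server actually returns for one of the files.

sudo service apache2 restart
curl -I http://localhost/Ubuntu/video.mp4 | grep -i content-type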

smb share takes forever to connect to from Mac OS X 10.7-8

Posted: 18 Sep 2021 07:01 PM PDT

I've got a dozen users and half of them take forever to connect to the SMB share served by a Windows Server 2008 R2 Standard server. Some users connect instantly with no issue.

These Mac OS X workstations have been clean-formatted to see if it was an OS issue, but some still take forever to connect.

I am wondering if there is something on the server side that can assist.

PHP with suexec/fcgid

Posted: 18 Sep 2021 08:08 PM PDT

httpd.conf file:

LoadModule fcgid_module modules/mod_fcgid.so
AddHandler fcgid-script .php
FCGIWrapper /usr/local/php5 .php

# manual
MaxRequestsPerProcess 1000
FcgidMaxProcesses 200
FcgidProcessLifeTime 7200
MaxProcessCount 500
FcgidIOTimeout 400
FcgidIdleTimeout 600
FcgidIdleScanInterval 90
FcgidBusyTimeout 300
FcgidBusyScanInterval 80
ErrorScanInterval 3
ZombieScanInterval 3
DefaultMinClassProcessCount 0
DefaultMaxClassProcessCount 3
MaxRequestLen 20468982

<VirtualHost *>
    ServerName hostname
    DocumentRoot /home/web
    ServerAdmin web@web.com
    <IfModule mod_suphp.c>
        suPHP_UserGroup web web
    </IfModule>
    SuexecUserGroup web web
    UserDir disable
</VirtualHost>

and this is my wrapper:

#!/bin/sh
exec /usr/local/bin/php

my error is:

/usr/local/apache2/logs/suexec_log

[2019-09-03 06:55:28]: user mismatch (daemon instead of www)  

/usr/local/apache2/logs/error_log

suexec policy violation: see suexec log for more details
[Tue Sep 03 06:55:28 2019] [warn] [client 127.0.0.1] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[Tue Sep 03 06:55:28 2019] [error] [client 127.0.0.1] Premature end of script headers: phpinfo.php

UPDATES:

I've edited:

/usr/local/apache2/bin/suexec -V
 -D AP_DOC_ROOT="/"
 -D AP_GID_MIN=100
 -D AP_HTTPD_USER="www"
 -D AP_LOG_EXEC="/usr/local/apache2/logs/suexec_log"
 -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
 -D AP_UID_MIN=100
 -D AP_USERDIR_SUFFIX="www"

but now I get no errors and nothing is found...
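
A note on the suexec log entry above, offered as an observation rather than a confirmed fix: suexec was compiled with AP_HTTPD_USER="www", while the "user mismatch (daemon instead of www)" message indicates the Apache workers are running as the default daemon user. One way to reconcile the two (an assumption about your setup) is to run httpd as the user suexec expects:

# httpd.conf -- make the worker user/group match suexec's AP_HTTPD_USER
User www
Group www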
