Saturday, July 17, 2021

Recent Questions - Server Fault



HP Server boots directly into CD Drive and not on HDD

Posted: 17 Jul 2021 09:09 PM PDT

HP Server (ProLiant DL380 G7) boots directly into the CD drive and not the HDD (C: disk)

I am currently using an HP Windows server for my small-office work. My server has the same problem every three days, like clockwork:

Day 1: When I power on my server, it boots from the HDD.

Day 2: When I power on my server, it again boots correctly from the HDD.

Day 3: My server boots from the CD drive, even though I have not inserted any CD.

Is there any way to solve this problem? I have searched the internet a lot but didn't find anything useful.

Advice Welcomed,

Thanks in Advance.

Configuring a cloud VM to be accessed by multiple users

Posted: 17 Jul 2021 07:54 PM PDT

I am a college CS professor. I want to have a remote server that all of my students can connect to. This is incredibly easy to do when I own the hardware. Just create user accounts on my server with the permissions I want them to have (read/write access to files in one folder, database connection), give the students the credentials. Easy. So easy.

This seems impossible to do on any of the major cloud platforms. I have tried GCP, AWS, and Azure. I've read so much documentation and I cannot find anything remotely close to my use case. All of the "for education" features force you to basically have one machine per student, not one machine all students can access. I've tried to use just regular VMs in the cloud (not "for education") and that also doesn't seem to be configurable the way I want. I just want to add user accounts to the VM and let students sign in to them. But to actually give sign in access to the VM, it seems that students need to have an account on that cloud service and I have to give their account administrative access to the VM I've created, which I do not want to do. What am I missing?
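For what it's worth, the on-box half of this works the same on a cloud VM as on owned hardware once you have SSH access as an admin; a minimal sketch, with made-up group, user, and folder names (run as root on the VM):

```shell
# Create a shared group and one student account (names are examples)
groupadd students
useradd -m -G students -s /bin/bash student1
echo 'student1:ChangeMe123!' | chpasswd

# One folder the whole group can read/write (setgid keeps group ownership)
mkdir -p /srv/coursework
chgrp students /srv/coursework
chmod 2775 /srv/coursework
```

Note that most cloud images ship with password SSH logins disabled, so students either need keys added to their accounts or `PasswordAuthentication yes` in sshd_config; none of this requires giving anyone an account at the cloud-console level.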

Windows IP routing POSTROUTING MASQUERADE

Posted: 17 Jul 2021 06:41 PM PDT

I'm trying to implement the same arch in the image below on Windows.

I tried many different ways with no luck. (I can achieve this on Linux with the following commands)

    sudo sed -i "s/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g" /etc/sysctl.conf
    sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
    sudo netfilter-persistent save
    sudo systemctl enable netfilter-persistent.service

VPN Arch Image
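For reference, a rough Windows-side equivalent exists in the built-in NAT support of Windows 10 / Server 2016 and later; a hedged PowerShell sketch (the interface alias and internal prefix are assumptions, run elevated):

```powershell
# Enable IP forwarding on the relevant interface
Set-NetIPInterface -InterfaceAlias "Ethernet" -Forwarding Enabled

# NAT (masquerade) the internal prefix out through this host
New-NetNat -Name "TunNat" -InternalIPInterfaceAddressPrefix "10.8.0.0/24"
```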

Thank you

Pausing a video on one PC causes all other PCs to pause the same video that they are all streaming from one machine within a LAN

Posted: 17 Jul 2021 05:51 PM PDT

I want to set up multiple computers that will play a video from one central computer on the local network, and have the video pause on all designated devices whenever one of the watching computers presses pause.

One computer stores video files.

Several others stream these videos using something like mplayer or vlc, or even ssh. All streaming devices are Linux boxes.

In mplayer, you pause a video by pressing space. I need to set up several machines that play videos from one specific machine, such that when any of the receiving PCs presses pause, the video also pauses on all the other PCs streaming the same video. All devices involved are deployed to meet this goal.
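This isn't something mplayer does natively over the network, but mpv (a close mplayer descendant) exposes a JSON IPC socket that makes the broadcast-pause part scriptable; a hedged sketch, with made-up hostnames and paths, assuming mpv, socat, and key-based ssh on every box:

```shell
# On each playback box: start mpv with a control socket
mpv --input-ipc-server=/tmp/mpvsock /mnt/videos/film.mkv &

# From any machine: mirror "pause" to every player at once
for h in player1 player2 player3; do
  ssh "$h" 'echo "{\"command\":[\"set_property\",\"pause\",true]}" \
            | socat - UNIX-CONNECT:/tmp/mpvsock'
done
```

A fuller solution would also watch each player's socket for pause events and rebroadcast them; projects like Syncplay implement exactly this workflow.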

Saving the date to actual bash history file

Posted: 17 Jul 2021 08:34 PM PDT

When "HISTTIMEFORMAT" is added to bashrc, the timestamps of when each command was executed become available when running the "history" command.

But the timestamps themselves are not saved to the bash_history file (at least not in plain text).

I am looking for a solution that will write the timestamp to the file itself so that archived .bash_history files from various workstations can be viewed in an editor outside the userspace and still contain the timestamps of when commands were executed.

If the timestamps are being saved to the history file itself but just not viewable in an editor when opening the bash_history file, and it is still possible to view these timestamps by using the history command itself on a rogue bash history file, then that would also suffice.
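For what it's worth, recent bash versions do write the stamps into ~/.bash_history when HISTTIMEFORMAT is set: each command is preceded by a "#<epoch>" comment line. A hedged sketch that renders such a file readable outside the shell (the sample data is made up):

```shell
# Build a sample history file in the format bash uses with HISTTIMEFORMAT
cat > /tmp/history.sample <<'EOF'
#1626571234
ls -la
#1626571300
systemctl status sshd
EOF

# Render "#<epoch>" comment lines as human-readable timestamps
while read -r line; do
  case "$line" in
    \#[0-9]*) date -d "@${line#\#}" '+%F %T' ;;   # timestamp comment
    *)        echo "  $line" ;;                    # the command itself
  esac
done < /tmp/history.sample
```

This means archived .bash_history files from workstations that had HISTTIMEFORMAT set already carry the timestamps, just as epoch comments rather than formatted text.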

Thanks

How to setup a failover server with KVM and DRBD

Posted: 17 Jul 2021 04:24 PM PDT

I have a server that I am virtualizing using the Virtualizor control panel. I need to set up a failover server that syncs everything on the first server to the second server. I am using DRBD to sync the servers. I have a couple of questions as I am new to this.

When using DRBD, my understanding is that DRBD syncs partitions. To set this up, should I sync just the KVM guests, or should I also sync the partition that holds the OS? If I make a configuration change to the OS on server1, shouldn't I want that to sync to server2?

When using Virtualizor is it better practice to use Virtualizor on a separate server than where the KVMs are stored?
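Regarding the first question: DRBD mirrors block devices, and the common pattern is to replicate only the partition or LV holding the VM images, not the host OS partition (each host keeps its own OS, managed separately). A hypothetical resource file, with invented hostnames, IPs, and device names:

```
# /etc/drbd.d/vmdata.res -- illustrative sketch only
resource vmdata {
  protocol C;
  device    /dev/drbd0;
  disk      /dev/sdb1;        # partition holding the KVM disk images (assumed)
  meta-disk internal;
  on server1 { address 192.168.1.10:7788; }
  on server2 { address 192.168.1.11:7788; }
}
```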

Microsoft Remote Desktop File Share Fail (VPN)

Posted: 17 Jul 2021 03:42 PM PDT

File sharing via Windows Remote Desktop over a certain F5 Networks VPN no longer works. I cannot see redirected folders on the remote Win Server when connecting with Mac or Windows 10 client (VM) over VPN.

The problem started about 11 months ago. There is no error. Problem: the defined "Redirect" folders on the Mac client no longer show up on the Windows server, and now I cannot transfer files between the two computers. What ports does Remote Desktop use for this feature, and what other ports does RDP require? I'm fairly sure the Windows server was patched around that time. Troubleshooting has involved many steps, including: recreating the defined folders on the Mac side, updating the RDP software, and recreating the shares on the Windows side, but I was never able to initiate the share over VPN. Please advise!

I recently moved the VPN connection and RDP software directly to Windows 10 and still have the same problem. Do you think a port is being blocked on the VPN, or is it something on the server (where I have admin access)?

What does RDP need for file sharing? I'm assuming it's beyond TCP/UDP 3389. I look forward to your response. THANKS!

Mac client version: 10.6.6 (1883); F5 Networks VPN; Windows Server 2016 Datacenter (1607, Build 14393.4467)

Ubuntu 20.04: su command bash-autocomplete stopped working

Posted: 17 Jul 2021 03:41 PM PDT

I am using Ubuntu 20.04. The su command autocomplete stopped working. For example when I type:

su [TabTabTab]

It lists the files and directories of the current directory, whereas it should list the available users.

Bash auto-completion for other commands is working fine. For example:

apt-get [TabTabTab] lists the available apt-get options.

usermod [TabTabTab] lists the available user accounts.

The su command was working fine before and now it is not. I have no idea when it stopped working.

I have checked other questions but most of them are related to bash-completion, not specific to the su command. So, before marking it duplicate please check the existing answer if it addresses the su command.
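For what it's worth, su's completion spec is supplied by the bash-completion package, and the user list it should offer is exactly what compgen produces; a quick check:

```shell
# The user names su's completion should offer:
compgen -u | head -n 5

# Inspect what is currently registered for su (run interactively):
# complete -p su
# Re-source the stock definitions if the spec is missing (Ubuntu path):
# . /usr/share/bash-completion/bash_completion
```

If compgen lists the users but `su [Tab]` still falls back to filenames, the completion spec for su specifically has been unregistered or shadowed, not bash-completion as a whole.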

AWS EC2 instance behind AWS ELB cannot get the real client's IP address

Posted: 17 Jul 2021 06:01 PM PDT

I am very new to nginx configuration. My API application runs on an EC2 instance that is automatically created by my AWS Elastic Beanstalk environment. The application uses Nginx, and the instance sits behind a classic ELB load balancer. A Route 53 domain routes traffic to the ELB.

I send packets from Postman or Packet Sender to that domain but never receive a response. After checking the nginx error log, I found that the client IP is shown as 10.0.2.63, not my PC's real IP address (149.15x.1xx.2xx). I guess 10.0.2.63 is a VPC address. Below is the nginx error log entry.

while reading PROXY protocol, client: 10.0.2.63, server: 0.0.0.0:80  

To my understanding, because the logged client IP is a VPC address, the EC2 instance cannot send its reply to the real client (my PC), and therefore Postman or Packet Sender receives an empty response.

Do I understand this correctly? How can I get the EC2 instance to see the real client IP address, like below:

while reading PROXY protocol, client: 149.15x.1xx.2xx, server: 0.0.0.0:80  

I don't know whether the problem is in the AWS ELB settings or the nginx settings.

The nginx config in my EC2 instance is:

    files:
      /etc/nginx/conf.d/proxy.conf:
        content: |
          client_max_body_size 500M;
          server_names_hash_bucket_size 128;

          upstream backend {
            server unix:///var/run/puma/my_app.sock;
          }

          server {
            listen 80 proxy_protocol;

            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            large_client_header_buffers 8 32k;

            set_real_ip_from 10.0.0.0/8;
            real_ip_header X-Forwarded-For;

            location / {
              proxy_http_version 1.1;
              proxy_set_header X-Real-IP $proxy_protocol_addr;
              proxy_set_header X-Forwarded-For $proxy_protocol_addr;
              proxy_set_header Host $http_host;
              proxy_set_header X-NginX-Proxy true;
              proxy_buffers 8 32k;
              proxy_buffer_size 64k;
              proxy_pass http://backend;
              proxy_redirect off;

              # Enables WebSocket support
              location /v1/cable {
                proxy_pass http://backend;
                proxy_http_version 1.1;
                proxy_set_header Upgrade "websocket";
                proxy_set_header Connection "Upgrade";
                proxy_set_header X-Real-IP $proxy_protocol_addr;
                proxy_set_header X-Forwarded-For $proxy_protocol_addr;
              }
            }
          }
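The usual culprit with `listen 80 proxy_protocol;` behind a classic ELB is that the ELB itself has no PROXY protocol policy attached, so nginx mis-parses the first bytes of each connection and the real client address is never extracted. For reference, the documented way to attach one with the AWS CLI (the load balancer name and backend port here are assumptions):

```shell
aws elb create-load-balancer-policy \
  --load-balancer-name my-elb \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name my-elb \
  --instance-port 80 \
  --policy-names EnableProxyProtocol
```

PROXY protocol also requires the ELB listener to be TCP, not HTTP; with an HTTP listener the alternative is to drop proxy_protocol and read the X-Forwarded-For header instead.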

Hosting of inconsistent workloads in Azure

Posted: 17 Jul 2021 02:43 PM PDT

In our company we have a lot of algorithms that need to process large datasets. Their run times range from a few minutes to hours, and they are run ad hoc, anywhere from several times a week to once a month. We would like to trigger these algorithms with an event, such as a file upload to Azure Blob Storage or an API call.

To solve this I started looking into queued processing of tasks in Azure. At first I thought Azure Functions might be a good solution because they are pay-as-you-go, but they are not meant for long-running operations. So I started looking elsewhere and found two pretty good alternatives, namely Azure WebJobs and Jobs in Azure Kubernetes Service. The problem with both is that they still need an active server even when nothing is running on it, which could be quite expensive for tasks that only run once a month.

My question is thus: does there exist a solution in Azure for hosting long running jobs without needing a dedicated server running 24-7?

How to block Filetransfer through RDP (Port 3389)?

Posted: 17 Jul 2021 03:34 PM PDT

For security reasons I have to restrict/disable file transfer via RDP (port 3389) from and to remote machines (Windows 10). Is the file transfer tunneled through port 3389 itself, or can I safely prevent it by blocking the SMB ports 139/445? A GPO would be too uncertain for me at this point.

Docker Compose WordPress, where are my WordPress files stored

Posted: 17 Jul 2021 05:06 PM PDT

I have successfully set up WordPress following the official instructions in Docker's documentation. I am running Windows and I can't seem to figure out where I can edit my WordPress files, such as wp-content and so forth. Here is the docker-compose.yml that I used to set up the container. Thanks ahead of time. Does it have something to do with the volumes setting? I shared my C drive with Docker.

    version: '3.3'

    services:
      db:
        image: mysql:5.7
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: somewordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress

      wordpress:
        depends_on:
          - db
        image: wordpress:latest
        ports:
          - "8080:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: wordpress

    volumes:
      db_data: {}

Edit: I would like to have my WordPress files in the same directory that I setup the container which is C:/Users/andersk/sites/wordpress.
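Regarding the volumes question: this compose file keeps WordPress's files inside the container (only the database gets a named volume), which is why nothing appears under the project directory. A hypothetical tweak to the wordpress service that bind-mounts wp-content next to the yml:

```yaml
# Added under services -> wordpress; "./" is resolved relative to the
# docker-compose.yml folder, i.e. C:/Users/andersk/sites/wordpress here
    volumes:
      - ./wp-content:/var/www/html/wp-content
```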

sudo user is not allowed to execute systemctl

Posted: 17 Jul 2021 10:08 PM PDT

I'm trying to allow a user to use sudo to manage a custom systemctl service, this however seems to fail and I can't figure out why.

    [root@testvm sudoers.d]# ll
    total 16
    -r--r-----. 1 root root 334 Oct  9 15:42 20_appgroup
    -r--r-----. 1 root root 104 Sep 17 11:24 98_admins

The 20_appgroup file contains this:

    [root@testvm sudoers.d]# cat 20_appgroup
    %appgroup    ALL= /usr/bin/systemctl restart test.service,
      /usr/bin/systemctl start test.service, /usr/bin/systemctl stop
      test.service, /usr/bin/systemctl status test.service

I have double-checked that the user is a member of appgroup; however, when this user runs sudo systemctl start test.service, it results in an error saying:

    Sorry, user tester is not allowed to execute '/usr/bin/systemctl start test' as root on testvm.

Any thought on what could be the issue?
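One thing to check: if 20_appgroup really contains those line breaks, sudoers requires explicit backslash continuations; without them, each wrapped line is parsed on its own and the rule never matches the full command. A sketch of the continued form, reusing the group and commands from the question:

```
%appgroup ALL= /usr/bin/systemctl restart test.service, \
    /usr/bin/systemctl start test.service, \
    /usr/bin/systemctl stop test.service, \
    /usr/bin/systemctl status test.service
```

Validating the file with `visudo -cf /etc/sudoers.d/20_appgroup` will flag this kind of syntax problem.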

Using auditd and retaining log files for 6 months

Posted: 17 Jul 2021 07:04 PM PDT

Disclaimer: I'm not an accredited nor very experienced sysadmin but have been tasked with some sysadmin responsibilities

Task: Find a way to log all account management activities (e.g., account creation, modification, deletion, etc.) on an Ubuntu 16.04 LTS server and retain the logging information for at least 6 months.

Details:

  • The previous sysadmin had installed auditd to the system as a first step in solving this issue.

    When running:

    sudo systemctl status auditd.service  

    systemd spits back that the service is successfully running and listening for events. It is my understanding that this package (auditd) is what I need to accomplish the task. The service seems to already be running and logging so where can I find and retain the log files for 6 months?

  • The file "/var/log/audit/audit.log" exists and the file is populated with audit information

  • Reading more online about how auditd works, I suspect the solution lies in configuring how the audit log is rotated. I do not fully understand rotation, but I believe log files are rotated when they reach a certain size rather than after a certain amount of time has elapsed. I think I can configure rotation by editing the file "/etc/audit/auditd.conf".
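For reference, the rotation knobs referred to above do live in /etc/audit/auditd.conf; the values below are illustrative only, sized so enough rotated files survive to cover 6 months of the expected log volume:

```
# /etc/audit/auditd.conf (fragment, illustrative values)
max_log_file = 8              # rotate when the active log reaches 8 MB
max_log_file_action = ROTATE  # rotate rather than suspend/halt logging
num_logs = 90                 # keep up to 90 rotated audit.log.N files
```

Rotation is indeed size-based; time-based retention comes from keeping enough rotated files (or archiving /var/log/audit/ off-box on a schedule).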

So, knowing these details (please ask for more information if you need it), how may I go about accomplishing the Task?

Many thanks for all the help in advance!

Apache: 503 Service Unavailable sometimes without any server load

Posted: 17 Jul 2021 08:06 PM PDT

I have an Apache server v2. Sometimes I get a 503 error without any server load at all; the error appears randomly, not at any specific time or when using specific services. How can I find or trace the cause? I've checked the error logs, and their last-modified date is yesterday, even though the error appeared multiple times today. Thanks and regards.
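One place 503s do show up even when the error log stays quiet is the access log; a small hedged demo on a synthetic sample (on the real server, point awk at /var/log/apache2/access.log, or wherever CustomLog writes):

```shell
# Two combined-format sample lines to demonstrate the field layout
cat > /tmp/access.sample <<'EOF'
1.2.3.4 - - [17/Jul/2021:10:01:02 +0000] "GET / HTTP/1.1" 200 512
1.2.3.4 - - [17/Jul/2021:10:05:09 +0000] "GET /api HTTP/1.1" 503 199
EOF

# $9 is the status code in the combined format; print the failing URLs
awk '$9 == 503 {print $7}' /tmp/access.sample   # -> /api
```

Seeing which URLs and timestamps the 503s cluster on usually narrows the cause (a proxied backend, a busy vhost) far faster than the error log alone.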

EdgeRouter X as VLAN-only Switch

Posted: 17 Jul 2021 08:06 PM PDT

The Ubiquiti EdgeRouter X (ERX) has a switching chip on board so that it can be used as an L3 switch instead of as a router.

I have another router, we'll call it router-core, which is serving an internal network on VLAN 100 on my local network. What I would like is to be able to configure my ERX so that the following behavior occurs when I connect it to my network:

  • The ERX does not get an IP address on VLAN 1
  • The ERX does get an IP address from my router-core on VLAN 100
  • Any other clients I connect to the ERX are automatically dropped onto VLAN 100, and subsequently can talk to the router-core.

Essentially, I am trying to configure the ERX as a smart switch with all the ports tagged for VLAN 100. This seems like it should be straightforward, but evidently it is not. (Note: the linked thread states that what I'm trying to do isn't supported, but that thread is nearly five years old now, so I'm looking for newer info if it exists.)

I have tried the following configurations:

  • Attempt #1:
    • switch0 address set to DHCP
    • switch0 vlan-aware enabled
    • Switch ports eth0-eth4 set so pvid is 100
  • Attempt #2: (with this one, switch0.200 got a DHCP lease from router-core but no client did)
    • switch0.200 address set to DHCP
    • switch0 vlan-aware set to disabled
    • Switch ports eth0-eth4 set with no VLAN configuration

The only other option I'm seeing is to create a bridged interface and try to work with that, but that loses all the performance of having a dedicated switching chip, which would be very frustrating.
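An untested sketch of the usual EdgeOS CLI approach, assuming eth0 is the uplink carrying VLAN 100 tagged from router-core and eth1-eth4 face untagged clients:

```
configure
set interfaces switch switch0 switch-port vlan-aware enable
set interfaces switch switch0 vif 100 address dhcp
set interfaces switch switch0 switch-port interface eth0 vlan vid 100
set interfaces switch switch0 switch-port interface eth1 vlan pvid 100
set interfaces switch switch0 switch-port interface eth2 vlan pvid 100
set interfaces switch switch0 switch-port interface eth3 vlan pvid 100
set interfaces switch switch0 switch-port interface eth4 vlan pvid 100
commit ; save
```

With no address configured on switch0 itself and only switch0.100 (the vif) requesting DHCP, the ERX should take no IP on VLAN 1, while client ports are untagged members of VLAN 100 and stay on the switching chip.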

Any help would be greatly appreciated.

Rewriting facility/severity in rsyslog v7 before shipping off to a remote collector

Posted: 17 Jul 2021 06:01 PM PDT

I have a machine "A" with a local rsyslogd, and a remote collector machine "B" elsewhere listening with its own syslog daemon and log processing engine. It all works great...except that there is one process on A that logs at local0.notice, which is something that B's engine can't handle.

What I want to do is rewrite local0.notice to local5.info before the event is shipped off to B. Unfortunately I can't change B, and I can't change the way the process does its logging on A. Nor can I upgrade rsyslogd on A from v7.6 to v8 (which appears to have some very useful-looking features, like mmexternal, that might have helped).

I think I must be missing something obvious, I can't be the first person to need this type of feature. Basically it comes down to finding some way of passing through rsyslog twice with a filter in between: once as the process logs, through the filter to change the prio, and then again to forward it on.

What I've tried:

  • configuring rsyslog to log local0.notice to a file, and then reading that file with an imfile directive that tags it and sets the new fac/sev, followed by an if statement that looks for the tag and calls an omfwd action. I thought perhaps I could persuade rsyslog to write a file at the right prio and then have rsyslog come back around and naturally pick it up. Sadly, no dice.
  • loading an omprog module that calls logger -p local5.info if syslogfacility-text == 'local0', stopping processing there...and then having another config element check for syslogfacility-text == 'local5' and if so calling an omfwd action. Strangely this works but doesn't squash the original messages, now I just get two sets of logs being forwarded to B, one local0 and one local5.

Are there any solutions out there?

Where is the default soft limit config file on Debian?

Posted: 17 Jul 2021 04:06 PM PDT

I have a process running as root whose open-file limit is capped at 1024 (in reality lsof shows me up to 1031 for it), but I can't find the file to modify this limit.

Here is the output of cat /proc/PID/limits to confirm it

    #cat /proc/32531/limits
    Limit                     Soft Limit           Hard Limit           Units
    Max cpu time              unlimited            unlimited            seconds
    Max file size             unlimited            unlimited            bytes
    Max data size             unlimited            unlimited            bytes
    Max stack size            8388608              unlimited            bytes
    Max core file size        0                    unlimited            bytes
    Max resident set          unlimited            unlimited            bytes
    Max processes             515045               515045               processes
    Max open files            1024                 4096                 files
    Max locked memory         65536                65536                bytes
    Max address space         unlimited            unlimited            bytes
    Max file locks            unlimited            unlimited            locks
    Max pending signals       515045               515045               signals
    Max msgqueue size         819200               819200               bytes
    Max nice priority         0                    0
    Max realtime priority     0                    0
    Max realtime timeout      unlimited            unlimited            us

However, I can't find that limit in the "classic" config files:

    #cat /proc/sys/fs/file-max
    13106306

    #ulimit -S -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 515045
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 65536
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 515045
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

    #ulimit -H -a
    core file size          (blocks, -c) unlimited
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 515045
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 65536
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) unlimited
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 515045
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

/etc/security/limits.conf is fully commented and /etc/security/limits.d/ is empty

I'm running debian 8.8 (jessie) on Linux version 3.14.32-xxxx-grs-ipv6-64 (kernel@kernel.ovh.net) (gcc version 4.9.2 (Debian 4.9.2-10) )

Thanks,
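Since limits are per-process and inherited from whatever started the process (an init script, cron, etc.) rather than read from one global file, one way to confirm and fix this without restarting anything is util-linux's prlimit; the PID in the comment is the one from the question:

```shell
# Inspect the open-files limit of the current shell (any PID works)
prlimit --pid $$ --nofile

# Raise soft and hard limits of the capped process (as root; PID assumed)
# prlimit --pid 32531 --nofile=65536:65536
```

If the 1024 comes back after a reboot, the init script or service definition that launches the process is the place to set the limit permanently; /etc/security/limits.conf only applies to PAM login sessions, which is why it is empty yet irrelevant here.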

LDAP (with ppolicy) errors on changing other user's password

Posted: 17 Jul 2021 06:01 PM PDT

I've set up an LDAP server with the ppolicy overlay, but now I'm having trouble resetting users' passwords in some cases: if the user has had a failed login, then the pwdFailureTime attribute exists, yet ldapmodify fails, complaining that it doesn't.

If my most recent log-in attempt was successful, then I can bind as cn=admin and run the ldif file:

    dn: uid=anton,ou=accounts,dc=[redacted],dc=ca
    changetype: modify
    replace: userPassword
    userPassword: foobar
    -
    replace: pwdReset
    pwdReset: TRUE

which succeeds. However, if the last log-in attempt was with a wrong password, ppolicy adds a pwdFailureTime attribute to the account, and then trying to run the ldif file above results in:

    $ ldapmodify -x -D "cn=admin,dc=[redacted],dc=ca" -W -H ldap:// -f pwreset.ldif
    Enter LDAP Password:
    modifying entry "uid=anton,ou=accounts,dc=[redacted],dc=ca"
    ldap_modify: No such attribute (16)
        additional info: modify/delete: pwdFailureTime: no such attribute

If I try deleting the pwdFailureTime attribute before resetting the password, then I get:

    ldap_modify: Constraint violation (19)
        additional info: pwdFailureTime: no user modification allowed

In real life, if a user's forgotten their password and needs it reset, they will generally have tried to recall the password several times, so will have the pwdFailureTime attribute set. Any suggestions?
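One avenue worth trying: pwdFailureTime is an operational attribute maintained by the overlay, which is why a normal modify is refused, and OpenLDAP's Relax Rules control (`ldapmodify -e relax`) exists for exactly this kind of administrative cleanup. A hedged sketch reusing the question's DN:

```ldif
# run with: ldapmodify -x -D "cn=admin,dc=[redacted],dc=ca" -W -H ldap:// -e relax -f clearfail.ldif
dn: uid=anton,ou=accounts,dc=[redacted],dc=ca
changetype: modify
delete: pwdFailureTime
```

With the failure times cleared, the original password-reset LDIF should then apply cleanly.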

Directory listing isn't working on nginx showing 404 error

Posted: 17 Jul 2021 05:37 PM PDT

UPDATED nginx.conf FILE AND ANYTHING ELSE TO THE LATEST STATE
BUT STILL GETTING 404 ERRORS :-(

So I was trying to set up directory listing on my server with nginx. I followed the instructions step by step but nothing worked out -- it always pops either 403 or 404 errors, while permissions are all set to 755...

When I enable autoindex on the root location it works fine, but when I put it on the "dl/" location, it either shows a 404 when requesting /dl or a 403 when requesting /dl/.

After I followed @Bryce Larson's steps... the 403 is gone; now only the 404 is there... which is still not okay...


    # pwd
    /root/Downloads/dl

    # ls -lha
    total 12K
    drwxr-xr-x 2 nginx root 4.0K Nov 25 20:01 .
    drwxr-xr-x 4 root  root 4.0K Nov 26 09:11 ..
    -rwxr-xr-x 1 nginx root   26 Nov 25 20:01 blah.txt


Here's the nginx.conf:
https://0bin.net/paste/he2oIb2OFou4G9Fd#v5qt5M7scM8jlSRkl9B+GepP+PoInAHrfZrJNJ7Ch9U I'm gonna use 0bin for long code/configs etc. to save time & effort; plus it's got syntax coloring ;-)


And yeah, I've restarted nginx a hundred times just to make sure it picks up the new config... so what's wrong now?

Otherwise, how would you configure the nginx server for this purpose? -- your own nginx.conf files are welcome; please paste them here: https://0bin.net
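For comparison, a minimal location for this layout (a sketch, not the poster's actual config). Two things commonly bite here: a location/alias trailing-slash mismatch produces exactly the /dl-404 vs /dl/-403 split, and the nginx worker needs execute (traverse) permission on every parent directory, which /root normally denies no matter how permissive the 755 further down is:

```nginx
location /dl/ {
    alias /root/Downloads/dl/;   # trailing slash must match the location
    autoindex on;
}
```

Moving the tree somewhere like /srv/dl sidesteps the /root traversal problem entirely.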

Exchange mailbox forwarding - emails fail dkim body hash

Posted: 17 Jul 2021 05:06 PM PDT

Exchange is modifying emails before forwarding them out to an external Google Apps account. I'm hoping to find a way to fix this.

Here's some more detail:

Using Exchange 2010 SP3 Version 14.3.123.4

The exchange server is forwarding email of some users out to Google Apps accounts (using an External Contact in AD). Exchange is set to put the emails in the user's mailbox and also forward a copy to their Google Apps account. The issue is that outside emails (from @google.com for example) are failing the DKIM check on the Google Apps side after being forwarded from Exchange and they are marked as spam. I got this info from looking at the email source and seeing this message:

    Authentication-Results: mx.google.com;
           dkim=neutral (body hash did not verify) header.i=@example.com;
           spf=pass (google.com: domain of user@mydomain.com designates 1.1.1.1 as permitted sender) smtp.mailfrom=user@mydomain.com;
           dmarc=fail (p=REJECT dis=NONE) header.from=example.com
  • user@mydomain.com - a user with an exchange mailbox and Google apps
    account
  • 1.1.1.1 - outside IP of Exchange server, included in SPF record
  • example.com - outside public domain that has dmarc configured

Testing and results of direct vs forwarded emails:

Below is a sample of two emails. One email was sent to the Exchange server user's email address, the other email was sent directly to the Google Apps email address using the temporary Google Apps assigned domain alias (user@mydomain.com.test-google-a.com).

The subject and body were exactly the same in both emails sent. The only difference between the two received copies is that the Exchange-forwarded email has modified body boundaries, and the charset value now has quotes around "UTF-8".

Direct to Gmail (user@mydomain.com.test-google-a.com):

    Content-Type: multipart/alternative; boundary=001a1149a47ee5ea57053414b981

    --001a1149a47ee5ea57053414b981
    Content-Type: text/plain; charset=UTF-8

    Test body

    --001a1149a47ee5ea57053414b981
    Content-Type: text/html; charset=UTF-8

    <div dir="ltr">Test body
    </div>

    --001a1149a47ee5ea57053414b981--

Forwarded from Exchange (user@mydomain.com):

    Content-Type: multipart/alternative; boundary="001a1149a47ee5ea57053414b981"

    --001a1149a47ee5ea57053414b981
    Content-Type: text/plain; charset="UTF-8"

    Test body

    --001a1149a47ee5ea57053414b981
    Content-Type: text/html; charset="UTF-8"

    <div dir="ltr">Test body
    </div>

    --001a1149a47ee5ea57053414b981--

I have a feeling the DKIM check fails because Exchange has added the quotes to the charset and boundary parameter values, which changes the body hash. Hopefully there is a way to disable this so forwarded emails pass DKIM without issue.

How to create virtual networks by using libvirt?

Posted: 17 Jul 2021 09:08 PM PDT

I have installed qemu/kvm and have tried to create some virtual machines and network them together.

What I would like to achieve is 2-3 virtual machines in their own private network (e.g. 10.0.0.0/24), all machines should be able to access external network, but only 1 machine should get IP that is accessible from outside.

    External Network
       .                     +-----------------+
       |                     | VM 1            |
       |                  +--| IP: 10.0.0.11   |
    +-----------------+   |  | IP: 82.130.y.y  |
    | Host            |---|  +-----------------+
    | IP: 82.130.x.x  |   |
    +-----------------+   |  +-----------------+
                          |--| VM 2            |
                          |  | IP: 10.0.0.12   |
                          |  +-----------------+
                          |
                          |  +-----------------+
                          +--| VM 3            |
                             | IP: 10.0.0.13   |
                             +-----------------+

I've tried adding a br0 bridge with brctl and bridging it with eth0, but that also set my host's nameserver to 192.168.1.1 and made the host inaccessible.

How should I do the configuration?

My current setup:

Name servers:

    # /etc/resolv.conf
    domain kyla.fi
    search kyla.fi
    nameserver 82.130.0.1
    nameserver 82.130.63.1

Interfaces and IP addresses:

    # ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether f4:6d:04:71:c4:1f brd ff:ff:ff:ff:ff:ff
        inet 82.130.x.x/26 brd 82.130.x.255 scope global eth0
           valid_lft forever preferred_lft forever

edit: Added configuration for br0:

    # The primary network interface
    #auto eth0
    iface eth0 inet manual

    auto br0
    iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

just virbr0 missing
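A hedged sketch of the libvirt-native way to get this topology without hand-building bridges with brctl: define a NAT network for the 10.0.0.0/24 side (the name and DHCP range below are invented), attach all three VMs to it, and give only VM 1 the public-facing path (a second interface bridged to eth0, or DNAT rules on the host) for the 82.130.y.y address:

```xml
<!-- private0.xml: NAT'd 10.0.0.0/24, structured like the stock "default" network -->
<network>
  <name>private0</name>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.11' end='10.0.0.100'/>
    </dhcp>
  </ip>
</network>
```

Load it with `virsh net-define private0.xml`, then `virsh net-start private0` and `virsh net-autostart private0`, and point each guest's interface at network `private0`; libvirt then handles the NAT and DHCP that the manual br0 attempt broke.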

Event ID 521 - Critical Logging Failure on Domain Controllers

Posted: 17 Jul 2021 09:08 PM PDT

I'm tasked with the monitoring and analysis of various logs via our SIEM solution, LogRhythm.

I noticed a few weeks back that we had large volumes of this event originating from all of our domain controllers. The log data is as follows:

    EventID: 521
    Event Data: unable to log events to the security log
    Status code: 0x80000005
    Value of CrashonAuditFail: 0
    Number of failed audits: 1

I've ensured that all domain controllers have sufficient disk space to write to the log & that the logs are configured to overwrite the oldest logs first. Servers have been bounced in the last few days but the issue remains.

I have read some suggestions about renaming the security event log file and restarting the machine so that a new file is created, but I can't believe the event file has become corrupt on all domain controllers.

It's also worth noting that all of the impacted domain controllers are in fact writing other events to the security event log!

We are getting ~61.34k of these events a day.

Any pointers would be massively appreciated.

Windows Server 2012 Terminal Server Degrading Performance on User Session

Posted: 17 Jul 2021 04:06 PM PDT

We have a terminal server environment with about 40 users that is experiencing a curious performance issue: when a given user logs in initially, everything functions properly; once that user starts to eat up more resources (upwards of 2 GB of memory and 2%-5% of overall CPU usage), their applications slow down considerably. If I have the user close everything, log off, and log back in, application performance is restored.

It's almost as if there's some kind of throttling on resources going on for each user session.

Has anyone experienced this phenomenon? The server resources are adequate; at peak we're using 50%-70% CPU and about 75% of memory.

Thanks in advance!
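The "throttling per session" hunch has a plausible mechanism: Windows Server 2012 RDS applies Dynamic Fair Share Scheduling, which caps each session's share of CPU (and, separately, disk and network) as contention rises. As an experiment, and strictly as a hedged sketch to verify in a maintenance window, CPU fair-sharing can be switched off via the registry:

```reg
Windows Registry Editor Version 5.00

; 1 = fair-share CPU scheduling enabled (the RDS default), 0 = disabled.
; Disable temporarily only to test whether per-session throttling is
; what users are experiencing; re-enable if it makes no difference.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Quota System]
"EnableCpuQuota"=dword:00000000
```

If the slowdown persists with fair-sharing disabled, the cause is more likely inside the applications themselves (leaked handles, growing working sets) than in the scheduler.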

How to reference a hiera variable from elsewhere in the hierarchy?

Posted: 17 Jul 2021 02:46 PM PDT

Suppose that in a very specific hiera YAML file I define a variable such as "env_name":

env_name: "dev-unstable"

Now in a more general hiera file I'd like to interpolate that variable into a string.

server_name: "service-%{env_name}.%{::domain}"

My testing seems to imply that hiera variables from elsewhere in the hierarchy aren't made available for interpolation in general cases. Is that true, unfortunately?
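It is true for plain `%{...}` interpolation: that syntax resolves Puppet variables (like `%{::domain}`), not other Hiera keys. To pull in another Hiera key, resolved against the whole hierarchy, Hiera provides interpolation functions. A sketch of the general-level file, assuming Hiera 5 (on Hiera 3 the equivalent is `%{hiera('env_name')}`):

```yaml
# common.yaml -- general level of the hierarchy.
# lookup() re-resolves 'env_name' through the full hierarchy, so the
# value set in the more specific file (e.g. "dev-unstable") wins:
server_name: "service-%{lookup('env_name')}.%{::domain}"
```

Note that `lookup()` interpolation can recurse through the hierarchy, so avoid defining `env_name` in terms of `server_name` or you will create an interpolation loop.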

Apache ForceType / SetHandler not responding as expected

Posted: 17 Jul 2021 07:04 PM PDT

I am trying to force Apache to handle a file (or a directory of files) as PHP regardless of file extension.

The link to the file should be as follows: http://mysitehere.info/sig2/name.png

I have tried,

<FilesMatch "\.(jpg|jpeg|png|gif|swf|flv|ico)$">
    SetHandler application/x-httpd-php
</FilesMatch>

This does not work: it returns a broken image icon for name.png, and a 404 Not Found when tried as sig2/name.png/.

I have also tried,

<Files .+*^$[]()>
    ForceType application/x-httpd-php
    SetHandler application/x-httpd-php
</Files>

I got that .htaccess content from "ForceType Sethandler Code Why Does This Work". It returned the same results as the first try: nothing but a 404 or a broken image.

I also tried,

<Files "name.png"> SetHandler application/x-httpd-php </Files>

This does "work" but it doesn't do what it needs to. Accessing the image by name.png gives me a broken image. Accessing it by name.png/ works for some reason that I am unsure of.

I have made sure that AllowOverride All is set in my httpd.conf for the directory of the image(s):

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require All Granted
</Directory>

This should be enough to get SetHandler or ForceType to work, I would assume, yet I still can't get the effect I want. Note that I do not have mod_rewrite installed on my server. Also of note: SetHandler and ForceType give the same results when used with <Files "name.png">. I am running PHP 5.5.9 and the latest version of Apache 2.

With that, am I doing something horribly wrong, or am I missing a module required for SetHandler?
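For reference, a minimal .htaccess sketch along these lines that works on a stock mod_php setup (this assumes mod_php is loaded; with PHP-FPM you would instead need a `SetHandler "proxy:fcgi://..."` line):

```apache
# Sketch, assuming mod_php: run every *.png in this directory
# through the PHP engine regardless of its extension.
<FilesMatch "\.png$">
    SetHandler application/x-httpd-php
</FilesMatch>
```

One subtlety worth checking: even when the handler is applied, the browser will show a broken image unless the script itself emits the right header, e.g. `header('Content-Type: image/png');` before the image bytes, because PHP output defaults to text/html.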

MariaDB crashing with "Assertion failure in thread xxx in file rem0rec.cc line 580"

Posted: 17 Jul 2021 03:02 PM PDT

I have three MariaDB servers set up in a Galera cluster. I use one server at a time as a "primary" master (i.e., Galera is just for failover, the app doesn't actively use multiple masters).

About once every two weeks or so, the primary master fails. The other two servers in the cluster are fine, and I can restart the crashed server and it recovers fine.

I've switched which of the three servers is the "primary" master, and the crash happens no matter which server I choose, so it seems unlikely to be hardware-related.

The question is -- why is this happening? How do I track it down? Should I just submit this to MariaDB as a bug?

2015-04-09 02:02:38 7f788745a700  InnoDB: Assertion failure in thread 140155642291968 in file rem0rec.cc line 580
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
150409  2:02:38 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see http://kb.askmonty.org/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.0.16-MariaDB-1~trusty-wsrep-log
key_buffer_size=52428800
read_buffer_size=131072
max_used_connections=128
max_threads=402
thread_count=11
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 934441 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0x7f75176b3008
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f7887459df0 thread_stack 0x30000
150409  2:02:44 [Warning] WSREP: last inactive check more than PT1.5S ago (PT5.98149S), skipping check
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') address 'tcp://192.168.178.10:4567' pointing to uuid c86d2afe-da1f-11e4-befa-264d853d1e46 is blacklisted, skipping
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') address 'tcp://192.168.178.10:4567' pointing to uuid c86d2afe-da1f-11e4-befa-264d853d1e46 is blacklisted, skipping
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') address 'tcp://192.168.178.10:4567' pointing to uuid c86d2afe-da1f-11e4-befa-264d853d1e46 is blacklisted, skipping
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') address 'tcp://192.168.178.10:4567' pointing to uuid c86d2afe-da1f-11e4-befa-264d853d1e46 is blacklisted, skipping
150409  2:02:44 [Note] WSREP: view(view_id(NON_PRIM,70802785-d454-11e4-9152-2b6d076ff37a,26) memb {
    c86d2afe-da1f-11e4-befa-264d853d1e46,0
} joined {
} left {
} partitioned {
    70802785-d454-11e4-9152-2b6d076ff37a,0
    e18a3f1a-c314-11e4-a25a-c6a751e32d91,0
})
150409  2:02:44 [Note] WSREP: view(view_id(NON_PRIM,c86d2afe-da1f-11e4-befa-264d853d1e46,27) memb {
    c86d2afe-da1f-11e4-befa-264d853d1e46,0
} joined {
} left {
} partitioned {
    70802785-d454-11e4-9152-2b6d076ff37a,0
    e18a3f1a-c314-11e4-a25a-c6a751e32d91,0
})
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') address 'tcp://192.168.178.10:4567' pointing to uuid c86d2afe-da1f-11e4-befa-264d853d1e46 is blacklisted, skipping
150409  2:02:44 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150409  2:02:44 [Note] WSREP: Flow-control interval: [16, 16]
150409  2:02:44 [Note] WSREP: Received NON-PRIMARY.
150409  2:02:44 [Note] WSREP: Shifting SYNCED -> OPEN (TO: 497086935)
150409  2:02:44 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
150409  2:02:44 [Note] WSREP: Flow-control interval: [16, 16]
150409  2:02:44 [Note] WSREP: Received NON-PRIMARY.
150409  2:02:44 [Note] WSREP: New cluster view: global state: ec05ddd0-c265-11e4-b715-e69a238eb511:497086935, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version 3
150409  2:02:44 [Warning] WSREP: Send action {(nil), 250, TORDERED} returned -107 (Transport endpoint is not connected)
150409  2:02:44 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
150409  2:02:44 [Note] WSREP: New cluster view: global state: ec05ddd0-c265-11e4-b715-e69a238eb511:497086935, view# -1: non-Primary, number of nodes: 1, my index: 0, protocol version 3
150409  2:02:44 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
150409  2:02:44 [Note] WSREP: (c86d2afe-da1f-11e4-befa-264d853d1e46, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.177.11:4567 tcp://192.168.179.12:4567
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7f7898d74c7e]
/usr/sbin/mysqld(handle_fatal_signal+0x457)[0x7f78988ac8a7]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7f7897059340]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39)[0x7f78966b0cc9]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f78966b40d8]
/usr/sbin/mysqld(+0x8832eb)[0x7f7898b9f2eb]
/usr/sbin/mysqld(+0x8858ff)[0x7f7898ba18ff]
/usr/sbin/mysqld(+0x802c9e)[0x7f7898b1ec9e]
/usr/sbin/mysqld(+0x892af5)[0x7f7898baeaf5]
/usr/sbin/mysqld(+0x895133)[0x7f7898bb1133]
/usr/sbin/mysqld(+0x8bece8)[0x7f7898bdace8]
/usr/sbin/mysqld(+0x8c3361)[0x7f7898bdf361]
/usr/sbin/mysqld(+0x8c3c27)[0x7f7898bdfc27]
/usr/sbin/mysqld(+0x8a4689)[0x7f7898bc0689]
/usr/sbin/mysqld(+0x804fb7)[0x7f7898b20fb7]
/usr/sbin/mysqld(_ZN7handler13ha_delete_rowEPKh+0x3f7)[0x7f78988b7b27]
/usr/sbin/mysqld(_Z12mysql_deleteP3THDP10TABLE_LISTP4ItemP10SQL_I_ListI8st_orderEyyP13select_result+0xf3e)[0x7f78989f047e]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x23cb)[0x7f7898723fcb]
/usr/sbin/mysqld(+0x40f7b7)[0x7f789872b7b7]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1ebb)[0x7f789872dd1b]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x20f)[0x7f789872e9bf]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1fb)[0x7f78987fcbcb]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x7f78987fcdb0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182)[0x7f7897051182]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f789677447d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f750940f020): is an invalid pointer
Connection ID (thread ID): 25689442
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on

The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
150409 02:02:46 mysqld_safe Number of processes running now: 0
150409 02:02:46 mysqld_safe WSREP: not restarting wsrep node automatically
150409 02:02:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
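One place to start: the backtrace ends in handler::ha_delete_row, so the assertion fires while a DELETE is removing a row, and the rem0rec.cc assertion is typically raised when InnoDB hits a corrupt record or index. A hedged sketch of the usual checks (`mydb.mytable` is a placeholder; substitute the table targeted by the DELETE, if you can recover the query):

```sql
-- Placeholder table name: use the table the crashing DELETE touched.
-- Check the table and its indexes for corruption:
CHECK TABLE mydb.mytable EXTENDED;

-- If corruption is reported, a null ALTER rebuilds the clustered
-- and secondary indexes from scratch:
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```

On a Galera cluster there is also a blunter option: stop the crashed node, wipe its datadir, and let a full SST rebuild it from a healthy peer. If a rebuilt table still asserts within a couple of weeks, that is worth filing with MariaDB as a bug report along with the backtrace above.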

exactly 90 seconds to restart apache httpd

Posted: 17 Jul 2021 07:24 PM PDT

I have an openSUSE 13.1 VM (the host runs VirtualBox 4.2.18, also on openSUSE 13.1), and restarting httpd (Apache/2.4.6) always takes a minute and a half:

foobar:~ # time /etc/init.d/apache2 restart
redirecting to systemctl restart apache2.service

real    1m30.778s
user    0m0.004s
sys     0m0.000s

A restart issued immediately afterwards is normal (very fast):

foobar:~ # time /etc/init.d/apache2 restart
redirecting to systemctl restart apache2.service

real    0m1.023s
user    0m0.004s
sys     0m0.000s

Five minutes later, the restart time goes back up to exactly 90 seconds:

foobar:/tmp # time /etc/init.d/apache2 restart
redirecting to systemctl restart apache2.service

real    1m30.684s
user    0m0.000s
sys     0m0.000s

What I've looked for so far:

  • top while apache is restarting doesn't show a lot (~0% usage).
  • netstat also doesn't show any connections with the outside world.

Note that this is a VM which currently has 0 traffic and there are plenty of free GBs available in memory and disk.

I've also found that it's the "stop" part of the restart that takes the 90 seconds.

Any idea why this is happening or where should I look at next?

Edit: I found out that when stop takes 90 seconds I consistently get the following in /var/log/apache2/error_log:

[core:notice] [pid 3179] AH00052: child pid 3203 exit signal Segmentation fault (11)  
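The two numbers line up suspiciously well: systemd's default stop timeout (TimeoutStopSec, inherited from DefaultTimeoutStopSec) is exactly 90 seconds, and a segfaulting child suggests Apache never completes its own shutdown, so systemd waits out the full timeout before sending SIGKILL. A drop-in sketch to test that theory (the path and the 10-second value are illustrative choices, not requirements):

```ini
# /etc/systemd/system/apache2.service.d/timeout.conf (hypothetical path)
# Run "systemctl daemon-reload" after creating this file.
[Service]
TimeoutStopSec=10s
```

If the restart then takes roughly 10 seconds instead of 90, the timeout is confirmed as the mechanism, and the real bug to chase is whatever makes the child segfault (the AH00052 line above), often a misbehaving module.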

Why httpd graceful restart takes such a long time?

Posted: 17 Jul 2021 07:23 PM PDT

I am checking /usr/local/apache/logs/error_log

This has happened several times: sometimes the server restart is fast, sometimes it's slow. What factors could possibly contribute to this?

[Mon Dec 31 21:40:49 2012] [notice] Graceful restart requested, doing restart
[Mon Dec 31 21:40:53 2012] [error] [client 66.249.74.237] File does not exist: /home2/wallpape/public_html/tag
[Mon Dec 31 21:40:53 2012] [error] [client 66.249.74.237] File does not exist: /home2/wallpape/public_html/404.shtml
[Mon Dec 31 21:50:02 2012] [notice] SSL FIPS mode disabled
[Mon Dec 31 21:50:03 2012] [notice] Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 configured -- resuming normal operations

On the other hand ungraceful restart seems to be faster:

[Mon Dec 31 21:52:58 2012] [notice] SIGHUP received.  Attempting to restart
[Mon Dec 31 21:52:58 2012] [notice] SSL FIPS mode disabled
[Mon Dec 31 21:52:58 2012] [notice] Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 configured -- resuming normal operations

From the manual: http://httpd.apache.org/docs/2.2/stopping.html

The parent re-reads its configuration files and re-opens its log files. As each child dies off the parent replaces it with a child from the new generation of the configuration, which begins serving new requests immediately.

It seems that a graceful restart is designed so that the service keeps running with no interruption at all. It doesn't work that way for me, though: all domains on my server are dead while it restarts. :(
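To put a number on the interruption, you can diff the timestamps of the "restart requested" and "resuming normal operations" lines in error_log. A minimal sketch, assuming GNU date (the two sample lines are condensed from the log above):

```shell
# Measure the gap between "Graceful restart requested" and "resuming
# normal operations" in an Apache error_log.
log=$(mktemp)
cat > "$log" <<'EOF'
[Mon Dec 31 21:40:49 2012] [notice] Graceful restart requested, doing restart
[Mon Dec 31 21:50:03 2012] [notice] Apache/2.2.23 (Unix) configured -- resuming normal operations
EOF

# Pull the bracketed timestamps out of the first matching line each.
start=$(grep -m1 'Graceful restart requested' "$log" | sed -E 's/^\[([^]]+)\].*/\1/')
end=$(grep -m1 'resuming normal operations' "$log" | sed -E 's/^\[([^]]+)\].*/\1/')

# GNU date parses the asctime-style stamp; diff the epoch seconds.
elapsed=$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) ))
echo "graceful restart took ${elapsed}s"
rm -f "$log"
```

Here the gap is over nine minutes, which matches the complaint that clients see the sites down during a "graceful" restart; on a healthy server the same measurement should come out near zero.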

htaccess rewrite for language subdomains

Posted: 17 Jul 2021 03:02 PM PDT

I need to point subdomains like es.domain.com to /public/www/index.php
The problem is that my host does NOT let me set a path; I can only set up the subdomains for "local use", which creates the folders in the public directory.

My structure is

/public/
/public/de/
/public/es/
/public/it/
/public/www/index.php

My host told me to use .htaccess files inside the sub domain folders.

I tried, for example in /public/es/, something like:

<IfModule mod_rewrite.c>
    Options +FollowSymlinks
    RewriteEngine On
    RewriteBase /
    RewriteCond %{HTTP_HOST} ^(de|es|it)\.mydomain\.com$

    # Create an environment variable to remember the language:
    RewriteRule (.*) - [QSA,E=LANGUAGE:%1]

    # Now check if the LANGUAGE is empty (= doesn't exist)
    RewriteCond %{ENV:LANGUAGE} !^$

    # If so, create the default language (=es):
    RewriteRule (.*) - [QSA,E=LANGUAGE:es]

    # Change the root folder of the spanish language:
    RewriteCond %{ENV:LANGUAGE} ^es$

    # Change the root folder:
    RewriteRule ^/?$ /public/www/index.php
</IfModule>

But I am getting a 404 on this:
The requested URL /public/www/index.php was not found on this server.

In my DNS list I see that es.domain.com is a CNAME for onlinux-it.setupdns.net, while www.domain.com is a CNAME for domain.com.

I also tried pointing es.domain.com as a CNAME to domain.com, but that did not change anything.
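Given that 404, the rewrite target is being resolved as a URL under the subdomain's own document root, where /public/www/index.php doesn't exist. Since the host forces each subdomain into its own folder, one low-tech workaround is to skip mod_rewrite entirely and drop a one-line front controller stub into each language folder. A sketch (the APP_LANG convention is an assumption; adapt it to whatever the shared index.php actually reads):

```php
<?php
// /public/es/index.php -- hypothetical stub: hand every request for
// this subdomain to the shared front controller one level up.
$_SERVER['APP_LANG'] = 'es'; // assumed convention for the target app
require __DIR__ . '/../www/index.php';
```

The de/ and it/ folders would get the same stub with their own language code, and the DNS CNAMEs can stay exactly as the host configured them.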
