Sunday, June 20, 2021

Recent Questions - Server Fault



Issue with exporting certain Windows Logs with "sub-Path" with Powershell or CMD

Posted: 20 Jun 2021 09:35 PM PDT

I am making a script that pulls all non-empty logs and saves them as either evtx, csv, or xml. I have the script working for the logs that are standard (Application, Security, etc.) and for those that have spaces in their names; however, for those that have "/" slashes in them (EXAMPLE: Microsoft-Windows-Ntfs/Operational), I keep getting errors. I tried swapping the / out with a dash, a space, an abbreviation, and an underscore; they all result in the error below. Note: I am using -newest 20 in the code for testing, to ease the load and save time.

Example of Code (Get the same results with either):

get-eventlog -log "Microsoft-Windows-Ntfs/Operational" -newest 20

OR

$Logname = "Microsoft-Windows-Ntfs/Operational"
get-eventlog -log $Logname -newest 20

ERROR:

get-eventlog : The event log 'Microsoft-Windows-Ntfs/Operational' on computer '.' does not exist.
At line:1 char:1
+ get-eventlog -log "Microsoft-Windows-Ntfs/Operational" -newest 20
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-EventLog], InvalidOperationException
    + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.PowerShell.Commands.GetEventLogCommand
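For what it's worth, Get-EventLog only talks to the classic event logs, so the newer "Applications and Services" logs such as Microsoft-Windows-Ntfs/Operational will always come back as "does not exist", however the slash is rewritten. A hedged sketch of the usual alternatives (the output paths are only examples):

# Query the modern log with Get-WinEvent instead of Get-EventLog
Get-WinEvent -LogName "Microsoft-Windows-Ntfs/Operational" -MaxEvents 20

# Export the whole log as .evtx; wevtutil accepts the slash in the name
wevtutil epl "Microsoft-Windows-Ntfs/Operational" C:\Temp\Ntfs-Operational.evtx

# Or save a CSV via PowerShell
Get-WinEvent -LogName "Microsoft-Windows-Ntfs/Operational" -MaxEvents 20 |
    Export-Csv C:\Temp\Ntfs-Operational.csv -NoTypeInformation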

How to connect from Ubuntu VM on Azure to another without uploading the private key

Posted: 20 Jun 2021 08:58 PM PDT

My Topology

Two Ubuntu servers: the edge, which is exposed to the internet, and the core, which is only connected locally. Both are on the same subnet, and the core only accepts SSH from the edge server. The SSH private keys are stored on the local computer I'm connecting from. I'm using a custom SSH port and MFA on both servers.

What I want to achieve is to connect from my computer to the edge server via SSH and from there connect to the core server using the private SSH key stored locally.

I'm sure it is quite simple but I have no clue how to achieve that.
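One common way to do this without copying the key anywhere is OpenSSH's ProxyJump, which hops through the edge while authenticating to both hosts from the local machine. A minimal sketch for ~/.ssh/config, with hypothetical host names, user, and custom port:

Host edge
    HostName edge.example.com
    Port 2222
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519

Host core
    HostName 10.0.0.5
    Port 2222
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyJump edge

With this in place, ssh core tunnels through the edge and the private key never leaves the local computer. Agent forwarding (ssh -A) is the older alternative, but it exposes the agent socket on the edge host, so ProxyJump is generally preferred.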

Can I clean out the windows cache for a disk without using windows?

Posted: 20 Jun 2021 06:33 PM PDT

I switched from Windows 10 to Linux Mint (kernel 5.4.0-58-generic). I have a boot SSD, which I reformatted, and two hard drives, which I left as is. One hard drive cannot be mounted at all, and the other can only be mounted in read-only mode.

I do not care as much about the first drive, since I can recover the files that I need from it and reformat it once the second drive has the space to hold those files. So the important issue is: how do I mount this second drive in read-write mode? I have tried mounting it:

~$ sudo mount -o rw /dev/sdb2 "/media/ben/eee"
The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Falling back to read-only mount because the NTFS partition is in an
unsafe state. Please resume and shutdown Windows fully (no hibernation
or fast restarting.)
Could not mount read-write, trying read-only

It's not going to be simple to reinstall Windows since I have no spare space, and I cannot write to any other drives. So, if at all possible, I would like to fix this issue from within Linux. fsck does not work, I think because these are both NTFS file systems. lsblk returns this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
└─sda1   8:1    0 223.6G  0 part /
sdb      8:16   0   1.8T  0 disk
├─sdb1   8:17   0   128M  0 part
└─sdb2   8:18   0   1.8T  0 part /media/ben/eee
sdc      8:32   0 931.5G  0 disk
├─sdc1   8:33   0     1M  0 part
├─sdc2   8:34   0   127M  0 part
├─sdc3   8:35   0 730.4G  0 part
├─sdc4   8:36   0   513M  0 part /boot/efi
└─sdc5   8:37   0 200.6G  0 part
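The mount error comes from ntfs-3g refusing a volume whose Windows "dirty" flag is still set, usually left behind by Fast Startup or hibernation. If booting Windows to shut it down cleanly really isn't an option, ntfsfix from the ntfs-3g package can clear the flag; a hedged sketch, with the caveat that anything still sitting in the Windows cache at shutdown may be lost:

# Install the NTFS tools if they are not already present (Mint/Ubuntu)
sudo apt install ntfs-3g

# Clear the dirty flag and schedule a chkdsk for the next Windows boot
sudo ntfsfix -d /dev/sdb2

# Try the read-write mount again
sudo mount -o rw /dev/sdb2 /media/ben/eee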

Configure network to allow clients to connect to a webserver running on a VM

Posted: 20 Jun 2021 06:25 PM PDT

Current Design: I currently have a webserver running off of my local machine, and it is web accessible with the A record pointing to my global IP (which usually does not change) and my consumer router forwarding all HTTP/HTTPS traffic to my local machine, which has a static IP. My machine then runs an Apache webserver which serves the correct files depending on the domain name (prod.example.com, stage.example.com, or dev.example.com). Let's ignore the API and the DB for the moment.


Proposed Design 1: Now I plan to move this over to a virtual environment (using multipass) with a dedicated VM for every webserver, and I am struggling to understand how to implement this. Should I introduce some intermediate Apache server that routes traffic internally based on the domain name (prod.example.com, stage.example.com, or dev.example.com), basically mimicking the functionality of an SSH proxy/bastion server?


Proposed Design 2: I am aware that there is no way to script my consumer-grade router every time a new VM is launched, but if there is a way to instruct multipass to assign the same static IP to a VM every time, and if there is a way to make that VM visible to my router (currently it is not), should I modify my A records to redirect to specific ports and my router to forward those ports? (Assume my global IP is 59.59.59.01 and my three VMs are on 10.0.0.1-10.0.0.3.)


Question: What is one way/the best way to architect a solution for this?
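For Design 1, the intermediate server is just a name-based reverse proxy: the router keeps forwarding ports 80/443 to one host, and that host proxies by Host header to the individual VMs. A hedged Apache sketch, assuming hypothetical VM addresses and that mod_proxy and mod_proxy_http are enabled:

<VirtualHost *:80>
    ServerName prod.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.1/
    ProxyPassReverse / http://10.0.0.1/
</VirtualHost>

<VirtualHost *:80>
    ServerName stage.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.2/
    ProxyPassReverse / http://10.0.0.2/
</VirtualHost>

One such vhost per site means neither the router nor DNS has to change when VMs come and go; only the proxy needs to know the internal addresses.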

Strange network traffic

Posted: 20 Jun 2021 06:30 PM PDT

Every 10 minutes my server receives traffic from 0.0.0.0 at ~1.7 MB/s for about 10 seconds. I checked all the services that I use and all cronjobs, and I checked the journalctl logs too; there is no info there.

I have no idea why this happens, and the same does not happen on any of my other VPSes with a similar configuration. I only noticed it because my website traffic has increased for some time now, and when this happens the "php-fpm active processes" go from ~5 to ~20 and the upstream response time of my webserver increases.

Screenshot of "iftop -B" command: https://ibb.co/JkL50Ky

Server info:

Virtualization: KVM with latest kernel stable version
OS: Centos 7

"netstat -tulpn" output:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      1132/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      844/sshd
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      945/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1132/nginx: master
tcp6       0      0 :::22                   :::*                    LISTEN      844/sshd

I don't know what other kind of information I should give. I have not found anything about it on Google, and I also contacted hosting support; they said they could not help. Any help is appreciated.

The only thing I thought about was using iptables to block "receiving" traffic from 0.0.0.0, but I'm not sure whether that will work. And I really want to know why this happens.

If I'm receiving traffic from 0.0.0.0, does that mean it's traffic from my own server, in some way?
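Packets with a source address of 0.0.0.0 are normally either DHCP discovery/broadcast traffic or something generated locally, so capturing one of the ten-second bursts usually settles it. A hedged capture sketch (the interface name is an assumption):

# Capture the next burst into a file for later inspection
tcpdump -i eth0 -nn -w /tmp/burst.pcap 'src host 0.0.0.0'

# Read it back; the ports will show whether it is DHCP (67/68) or something else
tcpdump -nn -r /tmp/burst.pcap | head -50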

Rsync or scp from/to a Docker containerized shell

Posted: 20 Jun 2021 04:51 PM PDT

I want users to be able to access a container via SSH. Or, more precisely: host users get a containerized shell. This could look strange, but it works:

$ cat /etc/passwd | grep staging
staging:x:1001:1001::/home/staging:/usr/local/bin/stagingclish

$ groups staging
staging : staging docker

$ cat /usr/local/bin/stagingclish
#!/bin/sh
cd /home/cloud/docker/myproject-staging && docker-compose run --rm --entrypoint=bash php-cli $@

php-cli is just a custom build from php:7.4-cli image, including some utilities like rsync. Also, /etc/passwd is mounted from host.

I can login with ssh staging@myhost.

I can invoke commands:

ssh staging@myhost ls /
Creating myproject-staging_php-cli_run ...
Creating myproject-staging_php-cli_run ... done
CHANGELOG.md
COPYING.txt
Gruntfile.js.sample
...

Now I would like to be able to use the scp and rsync commands to retrieve/upload files from/to the container.

But:

scp staging@myhost:/var/www/repo/auth.json.sample .
Creating myproject-staging_php-cli_run ...
Creating myproject-staging_php-cli_run ... done
usage: scp [-346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
           [-l limit] [-o ssh_option] [-P port] [-S program] source ... target
1

And

rsync staging@myhost:/var/www/repo/auth.json.sample .
[... rsync help ...]
rsync error: syntax or usage error (code 1) at main.c(1580) [client=3.1.3]
1
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(228) [Receiver=3.2.3]

I don't even know if this is possible. Could anyone shed some light?
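scp and rsync work by starting a counterpart program on the remote end and speaking a binary protocol over stdin/stdout, so anything extra printed by the login shell (the compose "Creating ..." lines) or a pseudo-TTY in the middle corrupts the stream. A hedged rewrite of the wrapper, assuming the same compose project as above: -T keeps stdin/stdout clean, and the compose progress output is diverted away from the protocol stream:

#!/bin/sh
# /usr/local/bin/stagingclish (sketch)
cd /home/cloud/docker/myproject-staging || exit 1
exec docker-compose run --rm -T --entrypoint=bash php-cli "$@" 2>>/tmp/stagingclish.log

Quoting "$@" matters here: unquoted, the single `-c 'scp -f ...'` argument that sshd passes to the shell gets split into separate words, and scp ends up being run with no arguments, which is exactly why it prints its usage text.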

Best practice: migrating multiple VMs and VHosts to Docker

Posted: 20 Jun 2021 10:15 PM PDT

I currently have about 20 sites and applications hosted in AWS EC2. Some have their own EC2, whilst others share an EC2 with multiple virtual hosts on that EC2.

Each site is completely separate and unrelated from another. The ones which share an EC2 are generally much smaller with little traffic/resource requirement (hence the shared server).

I also have one EC2 server which is simply used to run batch and scheduled tasks alongside the live version of the site, to ensure the live site stays accessible even when the scheduled tasks are heavy.

I am looking to make use of Docker across my whole dev > prod environment, for better use of server resources, easier migrations between environments, etc.

I'm keen to get your thoughts on the best practice for production server hardware.

Is it best to use one larger EC2 and have every site as its own docker container on there? This sounds like less server admin, a tidier overall setup, and from what I understand, each docker container still keeps itself to itself from a security point of view. But, any server issues or resource spikes would impact all sites (mitigated by a load balancer).

Or am I better off keeping them split across multiple EC2s, i.e. one EC2 per Docker container? This seems completely against the point of Docker, but I'm not sure if I'm missing something.

Using a single EC2 for all sites also makes it easier (less admin) to set up load balancers and/or failover servers.

Note: if it makes any difference, I use RDS for MySQL; there is no MySQL running on any EC2s directly.

Thanks in advance

What is the best way to host video content remotely for a VPS website?

Posted: 20 Jun 2021 04:04 PM PDT

I want to run a 2c/4t, 4 GB server for my website. I would like it to have videos on it, and I want the videos to appear as if they are locally hosted on the site, but I want them hosted somewhere that will give me a TB cheaply (like Google Drive, Mega, etc.) and then have the videos load asynchronously into the page.

(NO YouTube, Vimeo, etc.; I need the files in my possession.)

So: all website content on the VPS with 80 GB of space, and the 1,400 GB of video files stored remotely but appearing to be on the website; preferably not a simple embed, but that's fine too.

Bonus: different qualities? Recommendations of where to hold the files? (I have the VPS with OVH.) Best security practices, both in general and for copy protection? Any information that would help would be great.

I know Linux, Unix, Windows, OS X Server, Python, C, etc.

I have a few answers to my own question, but I have gotten better answers on these sites than anything I could have even thought of.

Sharepoint deletes by users at times when users are offline

Posted: 20 Jun 2021 03:40 PM PDT

Several users (myself included) have gotten "Heads up!" emails from SharePoint, saying that a large volume of files has been deleted. The files really have been deleted, and it's typically a lot of files in a 15-minute burst. But it's also happening at times when the user is not online and not logged in. (I can say this with some certainty: it happened on my account at 6:15 AM, before I had even turned my computer on for the day.)

These 'ghost deletes' are extremely alarming; they are happening for folders and files outside of those typically accessed by the users.

Office 365, Exchange license, 34 users (most part-time and semi-active); I'm the company administrator. I'm not finding anything on Google.
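When deletions show up under an account that was not at the keyboard, the usual suspects are a synced device (a OneDrive client on a machine that came back online), a retention/cleanup policy, or a compromised token, and the unified audit log will usually say which. A hedged sketch using the Exchange Online PowerShell module (the date window and result size are only examples):

Connect-ExchangeOnline

Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -Operations FileDeleted,FileDeletedFirstStageRecycleBin -ResultSize 5000 |
    Select-Object CreationDate, UserIds, Operations, AuditData

The AuditData JSON includes the client IP and user agent for each delete, which is usually enough to tell a sync client or script apart from an interactive user.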

systemd kills my ngrok session started from python

Posted: 20 Jun 2021 02:09 PM PDT

I have a script I wrote that listens on MQTT. When a certain code arrives at the MQTT server, an ngrok session is started like so:

subprocess.Popen(['/tmp/ngrok','http' ,'8080'], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

The script runs in a virtualenv and therefore has a shell script to activate the virtual env and run it:

#!/bin/bash
. ./venv/bin/activate
python mqtt_listener.py

When I run this script in my shell with & at the end, the ngrok session opens and is left open nicely until I kill it myself. However, it behaves differently when run under systemd using the following user unit file, /home/myuser/.config/systemd/user/mqtt_listener.service:

[Unit]
Description=mqtt run service
After=default.target

[Service]
Type=exec
ExecStart=/home/myuser/mqtt_listener/run_mqtt_service.sh
KillMode=process

[Install]
WantedBy=default.target

Once the service gets the MQTT command, I can see in the journal that the service got my message and forked its ngrok process, but then I can see that the service was "successfully deactivated" and then restarts. The strange thing is that it always happens when I'm not logged in to the server over SSH; if I'm logged in, the process will not die. Any idea what I'm doing wrong? The Type=exec is due to the fact that the other types just did not fit. I can't figure out why systemd considers my Python service to be done and thus kills it after a grandchild fork (the first fork is the run script, which apparently I can get rid of).
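The "only dies when I'm not logged in" symptom usually means the per-user systemd instance is being torn down when the last session closes, which takes every user unit (and its ngrok child) with it. Lingering keeps the user manager running without an active login; a hedged sketch:

# Run once (as root or with sudo): keep myuser's systemd --user instance
# and its services alive when no SSH session is open
sudo loginctl enable-linger myuser

# Verify
loginctl show-user myuser | grep -i linger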

Is it possible to block dhcp traffic using iptables?

Posted: 20 Jun 2021 09:27 PM PDT

I have two devices with embedded Linux. One of them (machine A) has two network interfaces: an eth interface that is used to connect the machines together and a wlan interface to connect to the router via WiFi. The second machine (B) has only one eth interface. My goal is to enable access to WiFi networks on machine B. I used some iptables rules to filter packets from machine A to machine B, and it works. Now I need to block DHCP traffic on the first machine so that it does not reach the second machine. I was looking for iptables rules to do this, but I found statements that it is impossible with iptables. Is there any other way to block that traffic?

Thank you in advance for any help.
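If machine A routes between wlan and eth, plain iptables can drop DHCP on the FORWARD path; it is only bridged (layer-2) traffic that iptables may not see, in which case ebtables or nftables on the bridge is the usual tool. A hedged sketch with assumed interface names:

# Drop DHCP (UDP ports 67/68) that would be forwarded out of the eth side
iptables -A FORWARD -o eth0 -p udp --dport 67:68 -j DROP

# If machine A itself should also stop sending/answering DHCP on eth0
iptables -A OUTPUT -o eth0 -p udp --dport 67:68 -j DROP
iptables -A INPUT  -i eth0 -p udp --dport 67:68 -j DROP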

Unable to connect to minikube ingress via minikube ip

Posted: 20 Jun 2021 07:40 PM PDT

So I just got started digging into minikube after having problems with docker-desktop here and there. I am following https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/, but I'm running into a problem.

I did exactly what the tutorial explains:

  1. Enable the ingress addon in minikube (Does this work with self-deployed ingress controller installed via helm as well, by the way?)
  2. Deployed an application
  3. Created the corresponding Ingress configuration

I verified all is available, and end up with the following output of kubectl get ingress:

NAME              CLASS    HOSTS              ADDRESS        PORTS   AGE
example-ingress   <none>   hello-world.info   192.168.49.2   80      87m

However, when trying to access hello-world.info (either that, setup in my hosts file, or 192.168.49.2 directly, just for connectivity testing), I'm getting absolutely nothing besides request timeouts. The logs of my ingress-controller also don't mention any failed connection attempts.

The connection works just fine when manually starting a tunnel via minikube service ingress-nginx-controller-admission --namespace=kube-system.

Here's the output of minikube profile list:

|----------|-----------|---------|--------------|------|---------|---------|-------|
| Profile  | VM Driver | Runtime |      IP      | Port | Version | Status  | Nodes |
|----------|-----------|---------|--------------|------|---------|---------|-------|
| minikube | docker    | docker  | 192.168.49.2 | 8443 | v1.20.2 | Running |     1 |
|----------|-----------|---------|--------------|------|---------|---------|-------|

I'm running this example on a Windows machine via the docker-desktop runtime.

Where am I going wrong? My ultimate goal is to enable a docker-desktop like experience via my ingress. I don't want to have to manually enable / disable tunnels to access my cluster.
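With the docker driver on Windows (and macOS) the cluster runs inside a container, so the 192.168.49.2 address is generally not reachable from the host; that is a driver limitation rather than a broken ingress. A hedged workaround sketch:

# Keep this running in a separate terminal; it needs elevated rights for ports 80/443
minikube tunnel

While the tunnel is up, the ingress is typically reachable on 127.0.0.1, so hello-world.info can be pointed at 127.0.0.1 in the hosts file. Alternatively, a VM-based driver (e.g. Hyper-V) usually makes the minikube ip routable directly.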

Error when attempting to upload file to Azure VM via SFTP

Posted: 20 Jun 2021 08:01 PM PDT

I receive the following error when attempting to upload certain files to my VM via SFTP:

Network error: Software caused connection abort

I had thought it was related to the file type (dll), as the upload had worked for text, php, and html files, but after further testing it also failed with zip and png files. I have tried both WinSCP and FileZilla; both generate the same error. I am fairly certain that it is being caused by some security or permission setting within the Azure portal, but I have no idea where to start looking.

Any suggestions?

Nginx + PHP index.php not found 404

Posted: 20 Jun 2021 10:04 PM PDT

I'm running Debian 9 with nginx 12 and PHP 7.1.

I've set everything up. Nginx does not give me anything in the error log, and all the PHP scripts are working 100% fine. Nginx serves index.html as the index but DOES NOT find index.php, and returns 404 even though I have it set in the nginx config.

Here is my nginx config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm;

    server_name _;
    server_tokens off;

    location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to displaying a 404.
            try_files $uri /index.html index.php;
    }

    # pass PHP scripts to FastCGI server
    #
    location ~ \.php$ {
            # With php-fpm (or other unix sockets):
            try_files $uri =404;
            include fastcgi.conf;
            fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
            fastcgi_index index.php;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #       deny all;
    #}
}
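Not necessarily the whole story, but one thing stands out in this config: the last argument of try_files is an internal-redirect fallback and needs a leading slash, so the bare index.php there never resolves. A hedged variant of the location block that is commonly used with php-fpm:

    location / {
            # Try the file, then the directory, then hand everything else to index.php
            try_files $uri $uri/ /index.php$is_args$args;
    }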

Install Google Seesaw Load Balancer

Posted: 20 Jun 2021 10:04 PM PDT

I have been trying to set up Google Seesaw, which is a load balancer based on LVS, but I have been unsuccessful so far following the directions (in index.md and /doc/getting_started.md). It is stated quite clearly that there is no support, so I understand not getting much feedback from the GitHub page; however, I would be grateful for any guidance. I have been unable to start the seesaw service: issuing systemctl status seesaw_watchdog shows 4/5 services running, all except seesaw_engine, and checking the logs it seems it doesn't recognize the backend entry in my cluster.pb file, despite it being a required field the way I understand it (I simply edited the example found here). Any pointers appreciated. Thanks.

Permissions prevent file upload in vsftpd

Posted: 20 Jun 2021 08:01 PM PDT

I want to set up vsftpd to allow a user (foouser) to upload files and create directories under /var/www/, with the intention of allowing entire websites to be uploaded.

Current Permissions:

  1. Apache runs at www-data.
  2. document root is: /var/www/
  3. Permissions are www-data:www-data for /var/www (recursively.)

Steps already taken:

Created user: foouser

 useradd foouser  

Added foo user to www-data group.

 usermod -a -G www-data foouser  

Set /var/www/ as foouser's homedir:

 usermod -d /var/www/  

Here's my vsftpd.conf file:

root@c9e0266eb8c8:/var# cat /etc/vsftpd.conf | grep -v ^#
listen=YES
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chown_uploads=YES
chown_username=www-data
xferlog_file=/var/log/vsftpd.log
xferlog_std_format=YES

But, I still cannot upload the file:

Command:   USER foouser
Response:  331 Please specify the password.
Command:   PASS ******
Response:  230 Login successful.
Status:    Server does not support non-ASCII characters.
Status:    Connected
Status:    Starting upload of /home/michael/settings.json
Command:   CWD /var/www
Response:  250 Directory successfully changed.
Command:   TYPE I
Response:  200 Switching to Binary mode.
Command:   PASV
Response:  227 Entering Passive Mode (172,17,0,2,174,22).
Command:   STOR settings.json
Response:  553 Could not create file.
Error:     Critical file transfer error

NOW... if I change the directory ownership from www-data:www-data to foouser:foouser, I can upload just fine, but that (of course) breaks Apache.

What am I doing wrong?

Edit: Allowing anonymous file upload to /var/www/ would also be fine. This is a Docker container, so an insecure practice like that is fine, since this will be used for development, not production.
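The 553 usually just means the FTP user has no write permission on the target directory: /var/www is www-data:www-data and presumably mode 755, so group members such as foouser cannot create files, and vsftpd's chown_uploads only applies to anonymous uploads anyway. A hedged sketch that keeps Apache's ownership intact:

# Give the group write access and make new files inherit the www-data group
chmod -R g+w /var/www
find /var/www -type d -exec chmod g+s {} \;

# Optionally set local_umask=002 in vsftpd.conf so uploaded files stay group-writable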

systemd service shuts down on its own

Posted: 20 Jun 2021 05:03 PM PDT

I have a problem with this systemd service:

[Unit]
Description=RTC Client Services
After=rds.service
Requires=rds.service

[Service]
User=USER
Group=GROUP
PermissionsStartOnly=true
RuntimeDirectory=rtc_client
RuntimeDirectoryMode=0770
WorkingDirectory=/usr/lib/systemd/scripts/
Type=forking
ExecStartPre=/bin/mkdir -p /var/run/rtc_client
ExecStartPre=/bin/chown -R USER:GROUP /var/run/rtc_client
ExecStart=/bin/bash rtc_client.sh start
ExecStop=/bin/bash rtc_client.sh stop
Restart=no
PIDFile=/var/run/rtc_client/rtc_client.pid
TimeoutStartSec=0
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target

The machine boots every morning. The service runs ExecStart, but then it suddenly stops, as it tries to kill the process PID:

rtc_client.service - RTC Client Services
   Loaded: loaded (/usr/lib/systemd/system/rtc_client.service; enabled)
   Active: failed (Result: exit-code) since Thu 2016-06-23 06:25:46 CEST; 3h 33min ago
  Process: 2754 ExecStop=/bin/bash rtc_client.sh stop (code=exited, status=0/SUCCESS)
  Process: 1819 ExecStart=/bin/bash rtc_client.sh start (code=exited, status=0/SUCCESS)
  Process: 1815 ExecStartPre=/bin/chown -R USER:USER /var/run/rtc_client (code=exited, status=0/SUCCESS)
  Process: 1813 ExecStartPre=/bin/mkdir -p /var/run/rtc_client (code=exited, status=0/SUCCESS)
 Main PID: 1949 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/rtc_client.service

Jun 23 06:25:46 zprds60 bash[2754]: Database Connection Information
Jun 23 06:25:46 zprds60 bash[2754]: Database server        = DB2/LINUXZ64 10.5.5
Jun 23 06:25:46 zprds60 bash[2754]: SQL authorization ID   = USER
Jun 23 06:25:46 zprds60 bash[2754]: Local database alias   = DBALIAS
Jun 23 06:25:46 zprds60 bash[2754]: /home/pers5i/.bash_profile: line 97: unalias: vi: not found
Jun 23 06:25:46 zprds60 bash[2754]: USER IS:  root
Jun 23 06:25:46 zprds60 bash[2754]: PID IS:  1949
Jun 23 06:25:46 zprds60 bash[2754]: rtc_client.sh: line 34: kill: (1949) - No such process
Jun 23 06:25:46 zprds60 bash[2754]: logout
Jun 23 06:25:46 zprds60 systemd[1]: Unit rtc_client.service entered failed state.

Here's the script that rtc_client.service launches:

#!/bin/bash

RTCENGINEID=$HOSTNAME'_engine'
RTCUSER='RTCUSER'
RTCPW='RTCPWD'
RTCSERVER='server.example.com'
RTCSERVERPORT='????'
RTCREPOSITORY=https://$RTCSERVER:$RTCSERVERPORT/ccm
WORKDIR='/opt/ibm/buildsystemtoolkit/buildsystem/buildengine/eclipse'
JAVACMD=/opt/ibm/java-s390x-71/jre/bin/java
ARGS="-cp ./plugins/org.eclipse.equinox.launcher_1.1.1.R36x_v20101122_1400.jar org.eclipse.equinox.launcher.Main -application com.ibm.team.build.engine.jazzBuildEngine -repository $RTCREPOSITORY -engineId $RTCENGINEID -userId $RTCUSER -pass $RTCPW"
RTCJAR=org.eclipse.equinox.launcher
PIDFILE='/var/run/rtc_client/rtc_client.pid'
DEBUGLOG='/tmp/rtc_debug.log'

. /home/USER/.bash_profile

start() {
        cd $WORKDIR
        nohup $JAVACMD $ARGS > $DEBUGLOG &
        sleep 5
        pgrep -f $RTCJAR > $PIDFILE
        echo "USER IS: " $(whoami) | tee -a $DEBUGLOG
        echo "PID IS: " $(cat $PIDFILE) | tee -a $DEBUGLOG
}

stop() {
        echo "USER IS: " $(whoami) | tee -a $DEBUGLOG
        echo "PID IS: " $(cat $PIDFILE) | tee -a $DEBUGLOG
        kill $(cat $PIDFILE)
        rm -f $PIDFILE
}

restart() {
        stop
        start
}

reload() {
        restart
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        restart
        ;;
  reload)
        reload
        ;;
  *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac

exit $?

The weird thing is that if I start the service or reboot the machine during the day, the service starts and stays alive...

   Loaded: loaded (/usr/lib/systemd/system/rtc_client.service; enabled)
   Active: active (running) since Thu 2016-06-23 10:21:44 CEST; 4s ago
  Process: 2754 ExecStop=/bin/bash rtc_client.sh stop (code=exited, status=0/SUCCESS)
  Process: 38200 ExecStart=/bin/bash rtc_client.sh start (code=exited, status=0/SUCCESS)
  Process: 38195 ExecStartPre=/bin/chown -R USER:GROUP /var/run/rtc_client (code=exited, status=0/SUCCESS)
  Process: 38194 ExecStartPre=/bin/mkdir -p /var/run/rtc_client (code=exited, status=0/SUCCESS)

Any help is much appreciated!
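One pattern worth considering: with Type=forking plus a wrapper that backgrounds the JVM via nohup and then guesses the PID with pgrep, systemd only tracks whatever PID the script happened to capture, and an early JVM exit (for example if the database from rds.service is not yet reachable at boot) surfaces exactly as the "Main PID ... status=1/FAILURE" shown above. A hedged sketch of a simpler [Service] section that lets systemd supervise the JVM directly (paths and arguments copied from rtc_client.sh; the repository host/port are placeholders, so treat this as a starting point rather than the exact fix):

[Service]
User=USER
Group=GROUP
Type=simple
WorkingDirectory=/opt/ibm/buildsystemtoolkit/buildsystem/buildengine/eclipse
ExecStart=/opt/ibm/java-s390x-71/jre/bin/java -cp ./plugins/org.eclipse.equinox.launcher_1.1.1.R36x_v20101122_1400.jar org.eclipse.equinox.launcher.Main -application com.ibm.team.build.engine.jazzBuildEngine -repository https://server.example.com:PORT/ccm -engineId %H_engine -userId RTCUSER -pass RTCPWD
Restart=on-failure
RestartSec=30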

BIND resolved IP address into logfile

Posted: 20 Jun 2021 06:04 PM PDT

I have a challenge where the log files are not recording the resolved IP address in the logged information. How is it possible to enable this, so that both the query name and the resolved IP address are in the logfile? Here is the code:

Configuration:

logging {
    channel query_log {
        file "/var/log/named/query.log";
        severity info;
    };
    category queries { query_log; };
};

Current Log file:

04-Nov-2015 08:28:39.261 queries: info: client 192.168.169.122#59319: query: istatic.eshopcomp.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.269 queries: info: client 192.168.212.136#48872: query: idsync.rlcdn.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.269 queries: info: client 192.168.19.61#53970: query: 3-courier.sandbox.push.apple.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.270 queries: info: client 192.168.169.122#59319: query: ajax.googleapis.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.272 queries: info: client 192.168.251.24#37028: query: um.simpli.fi IN A + (10.10.80.50)
04-Nov-2015 08:28:39.272 queries: info: client 192.168.251.24#37028: query: www.wtp101.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.273 queries: info: client 192.168.251.24#37028: query: magnetic.t.domdex.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.273 queries: info: client 172.25.111.175#59612: query: api.smoot.apple.com IN A + (10.10.80.50)
04-Nov-2015 08:28:39.275 queries: info: client 192.168.7.181#45913: query: www.miniclip.com IN A + (10.10.80.50)

Desired Log file:

.... istatic.eshopcomp.com 205.185.208.26 ....
.... idsync.rlcdn.com 54.84.163.33 ....
.... 3-courier.sandbox.push.apple.com 17.172.232.11 ....
.... ajax.googleapis.com 216.58.223.42 ....
.... um.simpli.fi 158.85.41.203 ....
.... www.wtp101.com 52.70.95.71 ....
.... magnetic.t.domdex.com 54.217.251.207 ....
.... api.smoot.apple.com 17.252.91.246 ....
.... www.miniclip.com 54.230.231.23 ....

Assistance will be truly appreciated.
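For what it's worth, BIND's queries category only ever logs the question, never the answer, so this cannot be switched on in the logging block alone. Getting query/answer pairs usually means either a BIND build with dnstap support (9.11+) or capturing the responses on the wire; a hedged capture sketch with an assumed interface name:

# Capture DNS traffic on the resolver; the responses contain the resolved addresses
tcpdump -i eth0 -nn -w /var/tmp/dns.pcap port 53

# Read the capture; each response line shows the query name and the returned IPs
tcpdump -nn -r /var/tmp/dns.pcap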

Classic asp site, randomly slow DB connection

Posted: 20 Jun 2021 03:06 PM PDT

We are running a site with classic ASP and ASP.NET MVC 4 (C#) side by side. During high traffic, database queries run really slowly in the ASP pages. At the same time, in the same site, the C# pages always connect normally to the same DB. CPU, memory and network usage are normal on both servers (powerful hardware/connection).

The site has been running the same setup and traffic load for years without any problems; this behavior started about a week ago. Does anyone know what could be wrong?

DB Server: SQL Server 2012 Web Edition

Web Server: Windows Server 2012 IIS 8.0

Connection string:

conn.connectionString = "Provider=SQLNCLI11;Persist Security Info=True;User ID=abc;Password=abc;Initial Catalog=sampledb;Data Source=192.168.10.11"

Sample loading times (ms) in iis server log:

2015-09-05 18:00:07  23642  /page.asp
2015-09-05 18:00:07  13547  /page.asp
2015-09-05 18:00:07  93     /ASP.NET
2015-09-05 18:00:07  11172  /page.asp
2015-09-05 18:00:07  78     /ASP.NET
2015-09-05 18:00:07  578    /ASP.NET
2015-09-05 18:00:07  10828  /page.asp
2015-09-05 18:00:07  32252  /page.asp
2015-09-05 18:00:07  13641  /page.asp

Sometimes, numbers are better for the asp pages:

2015-09-05 18:07:30  218    /page.asp
2015-09-05 18:07:30  3281   /page.asp
2015-09-05 18:07:30  46     /page.asp
2015-09-05 18:07:30  2375   /page.asp
2015-09-05 18:07:30  78     /page.asp
2015-09-05 18:07:30  46     /ASP.NET
2015-09-05 18:07:30  203    /ASP.NET
2015-09-05 18:07:30  2906   /page.asp
2015-09-05 18:07:30  1781   /page.asp

ASP queries are generally just slow, but sometimes we get an error:

Microsoft SQL Server Native Client 11.0 error '80040e31'
Query timeout expired

A test .asp page runs six identical SQL queries, with the total page load time in seconds. One query takes 13 seconds while the other ones are pretty much instant; on the next run another query is slow, and sometimes they are all fast.

Query 1: 0
Query 2: 0,3554688
Query 3: 0,375
Query 4: 13,32813
Query 5: 13,32813
Query 6: 13,32813

Active Directory: Permissions to get Kerberos Service Ticket

Posted: 20 Jun 2021 04:01 PM PDT

I have an Active Directory with a KDC running on Windows Server 2012.

At the moment, every user can request service tickets for every service from the TGS. I'm looking for a solution where the KDC only grants a service ticket for service X if the user is in group Y, or something similar.

Is that possible with Active Directory?

Lockdown Mozilla Thunderbird on Windows Remote Desktop Services

Posted: 20 Jun 2021 07:04 PM PDT

I've installed Mozilla Thunderbird 31.3.0 on a Windows Server 2012 R2 which has the Remote Desktop Services role.

I want to configure the Thunderbird program and set each user's email account etc and then I need to lockdown the program, so that users cannot make any changes.

I've been googling, trying to figure out how I can prevent users from changing any preference settings, and the only thing related to GPO that I can find is this, but I'm hesitant to use it.

Is there something official from Mozilla on how to do this?

I don't need to be specific about what I lockdown, so I'd be happy to just disable all preferences/settings in one go if that is easier.

UPDATE

Looks like I might have found a starting point here
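The mechanism Mozilla itself documents for this is AutoConfig: a local-settings.js under defaults/pref points at a config file, and lockPref() entries in that file pin preferences so users cannot change them. A hedged sketch (the two locked prefs are only examples; anything visible in about:config can be locked the same way):

// defaults/pref/local-settings.js (inside the Thunderbird install directory)
pref("general.config.obscure_value", 0);
pref("general.config.filename", "mozilla.cfg");

// mozilla.cfg (in the install directory; the first line must be a comment)
lockPref("app.update.enabled", false);
lockPref("mail.shell.checkDefaultClient", false);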

How can I check the partition name in FreeBSD?

Posted: 20 Jun 2021 09:04 PM PDT

I am currently running my server in rescue mode due to firewall issues. In order to disable the firewall I would have to mount the / partition.

My problem is that I don't know/remember what partition name to mount. I thought it would be /dev/ada0 (as on my similar server bought at the same time), but there is no such partition:

mount /dev/ada0 /mnt
mount: /dev/ada0: Invalid argument

The OVH web tutorial says it is possible to check the partition table via the fdisk -l command; however, that won't work on FreeBSD:

# fdisk -l
fdisk: illegal option -- l

Is there another possibility to check the partition table?
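On FreeBSD the usual tools for this are gpart and geom, and note that mounting the bare disk device (/dev/ada0) fails with "Invalid argument" even when the disk exists, because the filesystem lives on a partition. A hedged sketch (device names will differ in the rescue environment):

# List every disk the rescue kernel sees
geom disk list

# Show the partition tables with their sizes and types
gpart show

# Then mount the root partition, e.g. a UFS root on the second GPT partition:
mount /dev/ada0p2 /mnt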

Installing SecAst on AsteriskNOW with CentOS

Posted: 20 Jun 2021 04:01 PM PDT

Having some issues installing SecAst for IPS.

I followed the directions up to 2.1.6 and found a way (on this forum) to install qt5-qtbase (thanks), but when I run ldd /usr/local/secast/secast the output is "not a dynamic executable". I unpacked and installed the -x86_64-rh6 tarball... any suggestions?

Also, there are directions in 2.1.9 to make a directory structure with /etx/xdg... is this a typo, and should it be /etc/xdg... /etc/xdg/generationd? If not, where does the directory go under /etc/?

Also, in /usr/local/secast/ there appears to be a secast file, but when secast --help is run the result is "command not found". The files unpacked with no errors (I re-unpacked to be sure), and the color of the font is green.

Thanks
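Two quick checks that usually narrow this kind of thing down: ldd reporting "not a dynamic executable" typically means the binary is either statically linked or built for a different architecture than the running system, and "command not found" for secast --help is usually just because /usr/local/secast is not on $PATH. A hedged sketch:

# What kind of binary is it (architecture, static vs. dynamic)?
file /usr/local/secast/secast

# Does it match the running system?
uname -m

# Run it with an explicit path instead of relying on $PATH
/usr/local/secast/secast --help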

external fact not available at very first puppet run

Posted: 20 Jun 2021 03:06 PM PDT

Introduction:

We are using Puppet to configure the nodes via a custom fact which is then referenced in hiera. The fact can either reside in the golden image in /etc/facter/facts.d/ or arrive via pluginsync (it makes no difference, tested both).

Versions:

dpkg -l|grep puppet
hi  facter                             1.7.5-1puppetlabs1        amd64        Ruby module for collecting simple facts about a host operating system
hi  hiera                              1.3.4-1puppetlabs1        all          A simple pluggable Hierarchical Database.
hi  puppet                             3.4.3-1puppetlabs1        all          Centralized configuration management - agent startup and compatibility scripts
hi  puppet-common                      3.4.3-1puppetlabs1        all          Centralized configuration management

The setup is simple:

Puppetmaster:

cat hiera.yaml
:hierarchy:
  - "aws/%{::aws_cluster}"

/etc/puppet/hieradata/aws/web.json

EC2 Node:

cat /etc/facter/facts.d/ec_cluster.sh
echo 'aws_cluster=web'

So there is this golden EC2 image including the fact aws_cluster. This is referenced in hiera and specifies the classes and configurations to apply.

Problem:

When we boot the instance with autosigning enabled, the first run does not have $aws_cluster present on the client side. So it fails (which makes sense), saying:

puppet-agent[2163]: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find data item classes in any Hiera data file and no default supplied at /etc/puppet/manifests/site.pp:33 on node ip-172-31-35-221.eu-west-1.compute.internal  

When the puppet agent is restarted, everything works as expected. Any hints on this?

Our guess is:

  • does it have something to do with certificate generation?
  • what happens on the very first run?
  • is it different if we start it by hand with /etc/init.d/puppet start rather than via init?

Update:

When trying to start it via /etc/rc.local, it fails too. So there has to be a difference between interactive and non-interactive runs. Are there special environment variables which have to be set?
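Since the same agent works interactively but fails from init/rc.local, comparing what facter sees in both situations is usually the quickest test; external fact scripts also have to be executable or facter ignores them. A hedged sketch:

# Is the external fact script executable?
ls -l /etc/facter/facts.d/ec_cluster.sh

# Does the fact resolve with a stripped-down environment, similar to init?
env -i /usr/bin/facter aws_cluster

# Does it resolve the way the agent sees it (puppet-provided facts included)?
facter -p aws_cluster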

Accessing OwnCloud over VPN doesn't work on android and chromebook?

Posted: 20 Jun 2021 05:03 PM PDT

Here is my setup. I've got a Cisco ASA 5505 (latest IOS). Behind it, I have an Ubuntu 12.04 server running nginx, php-fpm, and OwnCloud (all latest versions). My desktop also sits behind the ASA and is able to access OwnCloud just fine. If I connect my Android tablet to our wireless access point and then access the OwnCloud web interface, everything works just fine.

I've setup L2TP/IPSEC VPN on the ASA. I can disconnect my ethernet on my desktop, tether to my phone, and connect to the VPN. From there I am able to SSH into the nginx server, VNC into other desktop machines, and access the OwnCloud web interface. Everything works perfect.

I can connect the Android tablet to the VPN (via hotspot tethering). From there I am able to SSH into the nginx server and VNC into desktop machines. The problem comes when I try to access the OwnCloud web interface: it doesn't work. It just sits there spinning. The strange thing is, I created a test.php file in the OwnCloud directory (with a simple echo('hello world');) and that page loads just fine.

I have captured traffic on the server using tcpdump, and I can see the GET request come in. The server responds. Then I see a couple of duplicate ACKS coming from the tablet and a few retransmissions coming from the server.

I should note that VPN clients are given IP addresses on a different subnet.

Here is my nginx config:

upstream php-handler {
    server 127.0.0.1:9000;
}

# redirect http to https
server {
    listen 80;
    server_name 10.3.3.3;
    #return 301 https://$server_name$request_uri; # enforce https

    root /var/www/owncloud/;

    client_max_body_size 10G;
    client_body_timeout 600s;
    client_header_timeout 600s;

    rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
    rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
    rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }

    location / {
        # The following 2 rules are only needed with webfinger
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

        try_files $uri $uri/ index.php;
    }

    location ~ ^(.+?\.php)(/.*)?$ {
        try_files $1 = 404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$1;
        fastcgi_param PATH_INFO $2;
        fastcgi_param HTTPS off;
        fastcgi_pass php-handler;
    }

    # Optional: set long EXPIRES header on static assets
    location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: Don't log access to assets
        access_log off;
    }
}

In summary, all devices work fine when on the local LAN. Desktop clients (OS X) work fine when connected over VPN. VPN Mobile clients (Android tablet) can SSH and VNC into local machines. HTTP requests also work fine for VPN on my simple test page, but are not able to access OwnCloud. What can I do to further diagnose the problem? What is the problem?
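The symptom pattern here (a tiny test.php works, the larger OwnCloud pages hang, and the capture shows duplicate ACKs plus retransmissions from the server) is the classic MTU/MSS problem over L2TP/IPsec: full-size packets from the server do not fit inside the tunnel and never reach the tethered client. The usual fix is MSS clamping for VPN traffic on the ASA; the equivalent experiment on the Linux side, as a hedged sketch with an assumed MSS value, is:

# Clamp the MSS of TCP connections leaving the server so packets fit in the tunnel
iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1300

If pages start loading with the clamp in place, MTU/MSS is confirmed and the permanent setting belongs on the VPN device.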

NFS mount share from Linux AD authentication to Linux with NIS authentication

Posted: 20 Jun 2021 09:04 PM PDT

I have two machines:

  1. Linux with AD authentication and running NFS server
  2. Linux with NIS authentication

Problem:

When I try to mount any share from the first machine (AD authentication) on the second (NIS authentication), I always get something like this: drwxrws---+ 13 16777260 16777222 4096 Sep 21 09:42 software

In fact, I can't access this folder, because on the NIS machine I don't have a user with that UID/GID.

Question:

Does somebody know how to resolve this problem?

IO-intensive processes hang with iowait, but no activity going on

Posted: 20 Jun 2021 07:04 PM PDT

I have a bunch of IO-intensive jobs, and to boost performance I just installed two SSDs in a compute server, one as a scratch file system and one as swap. After running for some time, all my processes hang in the "D" state and consume no CPU, and the system reports 67% idle and 33% wait. iostat shows no disk activity going on, and the system is otherwise responsive, including the relevant file systems. Attaching strace to the processes produces no output.

Looking in /proc/(pid)/fd, I discovered that all processes are using (reading) one common file. I can't see any reason why this should cause a problem, but I replaced the file, killed the processes, and let everything continue (i.e. new processes will be launched). We'll see if things get stuck on the new file, on a different file, or - ideally - not at all :-)

I also found a couple of these in kern.log:

BUG: unable to handle kernel paging request at ffffeb8800096e5c  

Lots of other information, but I don't know how to decipher it - except that it refers to the PID and name of one of my processes.

Any idea what is going on here, or how to fix it? This is on Ubuntu 12.04 LTS, Dell-something box with a RocketRaid disk controller and btrfs file system.

configuring nginx and tomcat together

Posted: 20 Jun 2021 06:04 PM PDT

I am trying to figure out exactly how to configure nginx and tomcat to work together correctly.

Nginx has a worker_connections setting and Tomcat has maxThreads (assuming the native APR connector for Tomcat). Since nginx connects to the backend with HTTP/1.0, keepalive is not needed for Tomcat.

I set the keep-alive timeout to 30 s in nginx. If 100 req/s is the target and each request finishes in 1 s, then 100 req/s * 30 s of keep-alive = 3000 concurrent connections can be open to nginx, while there will be about 100 concurrent connections to Tomcat.

So if I set worker_connections to 6000 in nginx (worker_processes is 1, and nginx consumes 2 connections per request I think: one for the client and one for the backend) and maxThreads to 100 in Tomcat (where the default is already 200), this should work.

Is there any conceptual problem in this calculation? The exact numbers do not matter.

Thanks.

Turning off cp (copy) command's interactive mode (cp : overwrite ?)

Posted: 20 Jun 2021 03:44 PM PDT

Does anyone know how I would turn off the interactive mode when using cp?

I am trying to copy a directory recursively into another and for each file that is getting overwritten I have to answer 'y'.

The command I am using is:

cp -r /usr/share/drupal-update/* /usr/share/drupal  

But I get asked to confirm each overwrite:

cp: overwrite `./CHANGELOG.txt'? y
cp: overwrite `./COPYRIGHT.txt'? y
cp: overwrite `./INSTALL.mysql.txt'? y
cp: overwrite `./INSTALL.pgsql.txt'? y
...

I am using ubuntu server version jaunty.
Thanks!
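The prompt usually comes from a shell alias (cp='cp -i') that some distributions define for root; a hedged couple of ways around it:

# Bypass the alias for one invocation
\cp -r /usr/share/drupal-update/* /usr/share/drupal

# Or force overwrites explicitly
cp -rf /usr/share/drupal-update/* /usr/share/drupal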
