Saturday, August 7, 2021

Recent Questions - Server Fault

How do online games send UDP packets across the internet?

Posted: 07 Aug 2021 10:10 PM PDT

How do online multiplayer games which use UDP get their packets delivered between networks over the internet? From what I understand, clients would have to enable port forwarding on their routers in order for the packets to arrive at their computers. Is this what big online games (WoW, Diablo, etc.) require players to do?

For example, I recently created a server that handles UDP traffic. It just echoes back whatever a sender has sent. I deployed this to a server on the internet. I can only get the echoes back to the sender after enabling port forwarding, but this will not work if there are two senders on the same local network.
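
For illustration, a minimal sketch of the usual answer, assuming socat is installed on both ends (SERVER_IP is a placeholder): NAT routers create a temporary return mapping for outbound UDP, so games have the client send first and the server reply to whatever source address and port it observed. No port forwarding is needed, and two clients behind the same NAT get distinct mappings.

# On the public server: echo each UDP datagram back to the address it came from.
socat -v UDP-LISTEN:9999,fork PIPE

# On a client behind NAT: sending first creates the NAT mapping, and the
# echoed reply is routed back through it.
echo "hello" | socat -t 2 - UDP:SERVER_IP:9999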

Aurora MySQL - How Much IOPS Am I Using?

Posted: 07 Aug 2021 06:07 PM PDT

I am testing Aurora MySQL as a possible replacement for Aurora Postgres. When bulk loading data into Aurora Postgres, it's very easy to see how much Postgres is going to cost just by looking at the IOPS during a bulk load.

[screenshot: Write IOPS graph showing ~40k]

But MySQL does not have a similar metric (Write IOPS). The only metric that looks comparable is Write Throughput. But after loading 800k rows into my MySQL DB I get nothing:

[screenshot: Write Throughput graph flatlined]

It's not possible that I've had an average of 0.5 inserts/s here, because I just loaded 800k rows.

My method for loading MySQL is a LOAD DATA FROM S3 FILE statement, if that makes a difference.

How can I see my consumed Aurora I/O?

How can I know how much this is going to cost if I go ahead and load another 300mm rows into this table?

Are Insert Throughput and Select Throughput equivalent to IOPS?
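
For reference, a hedged sketch of pulling Aurora's billed I/O out of CloudWatch: Aurora publishes cluster-level VolumeReadIOPs/VolumeWriteIOPs metrics, which reflect the I/O you are charged for. The cluster identifier and time window below are placeholders, and the dimension spelling is worth verifying against the RDS docs for your engine version.

# Sum of billed write I/Os over one day, in 5-minute buckets (identifiers assumed)
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name VolumeWriteIOPs \
  --dimensions Name=DBClusterIdentifier,Value=my-aurora-cluster \
  --start-time 2021-08-07T00:00:00Z --end-time 2021-08-08T00:00:00Z \
  --period 300 --statistics Sum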

Bad imaging using SCCM

Posted: 07 Aug 2021 05:57 PM PDT

If I am imaging new machines straight from the factory, I have no issues: barring hardware or network malfunctions, imaging works nearly 100% of the time. However, once in a while certain packages fail to install, and those machines have to be restaged and reimaged. Question 1 is: why do certain packages fail, but only sometimes? When I remove these failed machines from AD and reset the PXE boot flag, I see a very high rate of machines failing to PXE boot. So question 2 is: am I missing something when I try to restage?

isolating a Liquid Web "cloud dedicated" server during reimage

Posted: 07 Aug 2021 05:52 PM PDT

Might be a long shot but I'm hoping the group mind has an answer to this Liquid Web conundrum:

We have two "Cloud Dedicated" servers with Liquid Web. We've taken an image (including a ton of application data) of our live server A and want to restore it on server B as a base, then reconfigure B as a warm spare.

But when B comes up after the re-image, we don't want it sending out duplicate or bogus e-mail to users that might have been spooled on A when the image was taken, or that might be triggered by cron jobs, etc., running on the now somewhat stale data.

So the issue is controlling the server after a re-image, such that we can either stop outgoing SMTP connections, or immediately turn off the mail server.

If I had a physical server in front of me, I'd just bring it up in single-user mode, edit the systemd config to turn off postfix, easy peasy. So first I thought we might be able to do that: bring the virtualized server up in single-user mode and configure it through the virtual console in the management interface. We're told that's not possible.
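
For what it's worth, a minimal sketch of what that single-user/console session would do, assuming a systemd distribution with Postfix as the MTA (the queue purge is optional):

systemctl mask --now postfix   # stop it and prevent it from starting on any boot
postsuper -d ALL deferred      # optionally drop mail already sitting in the queue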

It was suggested that we could use LW's "advanced firewall" to turn off SMTP connections. But their so-called "advanced" firewall can only control incoming connections :-/ and we want to be able to turn off outgoing SMTP connections.

We've asked if they could turn off outgoing connections at the closest router, just dropping packets from that IP with (only) the SYN flag set. They say there's no way to do this. I find this surprising, but there it is.

OK, I thought, maybe we can live with it if I can control when the server boots and get in quickly enough to prevent more than a few unwanted messages from getting out. No: it turns out the server boots automatically after being re-imaged, and we can't even control that. I'd have to sit and watch for some unknown time (hours? it's a big image) as the image loaded, then jump in when it booted. Not practical.

Any ideas? There has to be some way of booting a server under more controlled conditions!

I'm wondering if it's possible for them to temporarily set DHCP so that the server isn't given a routable address when it comes up, but is still accessible from the console in the management interface? I've asked that in the most recent ticket but have gotten no reply.

How to start containerd as a service after yum install?

Posted: 07 Aug 2021 04:38 PM PDT

I installed containerd on Amazon Linux 2 using the suggested commands:

sudo amazon-linux-extras enable docker
sudo yum install -y containerd

I added this in the EC2 user data script to run at instance launch time.

But how am I supposed to start containerd (a container runtime, similar to Docker) as a service? Installing through yum doesn't seem to include a systemd service file. The binary is located at /usr/bin/containerd. Am I supposed to use echo in the boot script to generate a systemd service file, or what is good practice?
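
One reasonable approach is to drop in a unit file from the same user data script. A minimal sketch, adapted from the unit that upstream containerd ships; the paths are assumptions for Amazon Linux 2:

sudo tee /etc/systemd/system/containerd.service >/dev/null <<'EOF'
[Unit]
Description=containerd container runtime
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Restart=always
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now containerd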

Upgrade from CentOS 6.10 to what OS for OTRS/Znuny?

Posted: 07 Aug 2021 06:34 PM PDT

I have a CentOS 6.10 installation which needs an upgrade to a newer release. But since long-term support for CentOS is being dropped, I now have to decide which path to switch to.

I need long-term support for installed Linux packages regarding security updates, and CentOS did a good job on this.

Can anyone tell me the best distro alternative for OTRS/Znuny? Rocky Linux? AlmaLinux? Or maybe even something else?

Change Nginx proxy pass public path

Posted: 07 Aug 2021 08:54 PM PDT

I have a Python/Django API with a unique endpoint /videos running on my Debian server.

The Nginx vhost looks like this:

server {
    server_name example.com;

    location / {
        # Pass to Uvicorn/Gunicorn web server service
        proxy_pass http://upstream_name/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /path_to/fullchain.pem; # managed by Certbot
    ssl_certificate_key /path_to/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

upstream upstream_name {
    server 127.0.0.1:8002;
}

Thus, it successfully serves the app and its unique endpoint at https://example.com/videos.

Now, I would like to serve the app at https://example.com/my_app/videos, in order to have other apps served on the same domain/vhost in the future (with different internal ports and different upstreams in the vhost, of course).

I've been reading several similar Q&As on Server Fault, and have been trying to change location / to location /my_app, while trying different trailing-slash configurations on location and proxy_pass, with no success. What am I missing here?

EDIT: More precisely:

  • With the vhost changed to location /my_app -> https://example.com/my_app/videos displays a Not Found error (not from Nginx)

  • With the vhost changed to location /my_app/ -> https://example.com/my_app/videos get redirected to https://example.com/videos/ and displays a 404 Not Found error (from Nginx)
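
A hedged sketch of one combination that typically works, using the names from the question: with trailing slashes on both directives, nginx strips the /my_app/ prefix before proxying, and the redirect back to /videos/ suggests the Django app is still generating URLs without the prefix, so the app has to be told about it as well (e.g. Django's FORCE_SCRIPT_NAME = '/my_app'; that setting and the snippet path are assumptions):

sudo tee /etc/nginx/snippets/my_app.conf >/dev/null <<'EOF'
location /my_app/ {
    proxy_pass http://upstream_name/;             # trailing slash strips /my_app/
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Prefix /my_app;  # one convention for hinting the prefix
}
EOF
# then, inside the existing server { ... } block: include snippets/my_app.conf;
sudo nginx -t && sudo systemctl reload nginx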

Same Domain - 2 servers - Ping test from one to the other

Posted: 07 Aug 2021 04:48 PM PDT

I have two Genesys servers in the same domain (one primary and one backup). Why would we have to enter the FQDN to ping from one server to the other? Shouldn't I be able to enter ping abc-01 and not have to enter ping abc-01.xyz.com?
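
A quick way to see whether this is a DNS suffix problem, using the hostnames from the question as placeholders: short-name lookups only work if the DNS suffix search list on each server includes the domain.

nslookup abc-01          # relies on the DNS suffix search list
nslookup abc-01.xyz.com  # fully qualified, bypasses the search list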

Dell Compellent clock skew too great

Posted: 07 Aug 2021 06:05 PM PDT

I received this alert from our Dell Compellent SC200: "Alert created on controller '12345' for object [internal ref: 'Global'] - [ProgressionTime]: Some Data Progression features, such as RAID Rebalance, may not run properly because the controller's clock skew is too great. Configure time settings to resolve."

[screenshot: alert message]

In the "Unisphere for SC Series" web interface, I found the "Time Settings" section where I can change the ntp server:

[screenshot: Time Settings section]

For those who have resolved this error before:

  1. Is this the correct setting to address the issue?
  2. Will updating the ntp server incur any downtime?

Thank you.

Nginx is working, but nginx.service: Can't open PID file on Debian 10

Posted: 07 Aug 2021 09:44 PM PDT

I'm using nginx 1.20.1 on Debian 10. It's working, but systemctl status nginx shows this:

systemd[1]: Starting nginx - high performance web server...
systemd[1]: nginx.service: Can't open PID file /run/nginx.pid (yet?) after start: No such file or d
systemd[1]: Started nginx - high performance web server.

I googled a lot and checked the permissions of the folders related to the path /var/usr/nginx.pid, and checked that the path is the same in both /etc/nginx/nginx.conf and /usr/lib/systemd/system/nginx.service. Good to say that nginx.pid is created when nginx is running and deleted when I stop nginx. I tried using /usr/nginx.pid in the .conf and .service, but the same problem exists. In my error.log, no error or warning is logged. I should also mention that it's a fresh Debian VM and a fresh nginx, with no extra modification to the default nginx.conf!

Question: should I be worried about Can't open PID file /run/nginx.pid (yet?) after start: No such file or d? If it is important, how can I resolve it?
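
For what it's worth, this message is commonly a harmless race on Debian: nginx forks before it finishes writing the PID file, and systemd checks too early. A widely cited workaround (an assumption that it applies here) is a drop-in override that delays the readiness check slightly:

sudo systemctl edit nginx
# add in the editor that opens:
#   [Service]
#   ExecStartPost=/bin/sleep 0.1
sudo systemctl daemon-reload && sudo systemctl restart nginx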

How to setup Mosquitto MQTT Broker in Kubernetes

Posted: 07 Aug 2021 06:03 PM PDT

I have been trying to set up ChirpStack in a Kubernetes space, but it doesn't seem to be working for me, and I can't find any resources online that solve it.

chirpstack-application-server-6d6f8d699c-nlrmx 1/1 Running 0 44s
chirpstack-gateway-bridge-5454b7f9f-fm5wl 1/1 Running 0 73s
chirpstack-mosquitto-646899d74d-d7bhl 0/1 CrashLoopBackOff 3 85s
chirpstack-network-server-66cdf9bdf7-rhzg5 1/1 Running 0 55s

Above are all the pods I have at the moment. The app server, network server, and gateway bridge all spin up and run; however, the Mosquitto broker moves to 'Completed' and goes right into CrashLoopBackOff. I figured it might be something to do with a lack of config, so I've spent a few days putting together the mosquitto.conf file with "allow_anonymous true", hoping to get a connection from any of my ChirpStack components, but the logs just indicate an MQTT connection refused error.

output of kubectl logs chirpstack-application-server

time="2020-12-10T15:01:41Z" level=error msg="integration/mqtt: connecting to broker error, will retry in 2s: Network Error : dial tcp 10.244.146.236:1883: i/o timeout"

Because no connection could be made, I assumed it was the opposite and I needed to add in the password_file and make allow_anonymous false. Below is my current config if anyone might have an idea what is wrong.

configMap-1.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-password
  namespace: ****
  labels:
    app: chirpstack-mosquitto
data:
  password_file.txt: |
    admin:admin
    user:user
    app-server:app-server
    net-server:net-server
    gateway-bridge:gateway-bridge

configMap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config
  namespace: ****
  labels:
    app: chirpstack-mosquitto
data:
  mosquitto.conf: |
    persistence true
    persistence_location /mosquitto/data/
    # per_listener_settings false
    log_dest stdout
    # listener 1886
    listener 1883
    protocol mqtt
    # Defaults to false, unless there are no listeners defined in the configuration
    # file, in which case it is set to true, but connections are only allowed from
    # the local machine.
    allow_anonymous false
    password_file /.config/mosquitto/auth/password_file.txt
    #    cafile: /mosquitto/config/certs/ca.crt
    #    certfile: /mosquitto/config/certs/server.crt
    #    keyfile: /mosquitto/config/certs/server.key
    require_certificate false
    use_identity_as_username false

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chirpstack-mosquitto
  namespace: ****
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chirpstack-mosquitto
  template:
    metadata:
      labels:
        app: chirpstack-mosquitto
    spec:
      containers:
      - name: chirpstack-mosquitto
        image: ****/chirpstack/eclipse-mosquitto:1.6.12
        ports:
        - containerPort: 1883
        volumeMounts:
        - name: password-file
          mountPath: /.config/mosquitto/auth/password_file.txt
          subPath: password_file.txt
        - name: mosquitto-data
          mountPath: /mosquitto/data
        - name: mosquitto-log
          mountPath: /mosquitto/log
        - name: config-file
          mountPath: /.config/mosquitto/mosquitto.conf
          subPath: mosquitto.conf
      securityContext:
        runAsNonRoot: true
        fsGroup: 1
        runAsGroup: 1000
        runAsUser: 1000
        supplementalGroups:
        - 1
      volumes:
      - name: config-file
        configMap:
          name: mosquitto-config
      - name: password-file
        configMap:
          name: mosquitto-password
      - name: mosquitto-data
        emptyDir: {}
      - name: mosquitto-log
        emptyDir: {}

service.yml

apiVersion: v1
kind: Service
metadata:
  name: chirpstack-mosquitto
  namespace: 186215-poc
spec:
  type: ClusterIP
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: chirpstack-mosquitto
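
Two hedged observations that may explain the CrashLoopBackOff, based on how the official eclipse-mosquitto image behaves (worth verifying against the internal mirror used here): the image's default command is mosquitto -c /mosquitto/config/mosquitto.conf, so a config mounted at /.config/mosquitto/mosquitto.conf is never read unless the container command/args are overridden; and mosquitto's password_file must contain hashes produced by mosquitto_passwd, not plaintext user:password pairs.

kubectl logs chirpstack-mosquitto-646899d74d-d7bhl --previous   # why the last run died
mosquitto_passwd -b password_file.txt admin admin               # hash one entry locally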

NGINX redirecting the wrong subdomains

Posted: 07 Aug 2021 03:13 PM PDT

I have setup my NGINX to only accept HTTPS traffic on port 443 and I want to redirect all non-HTTPS traffic from port 80 to HTTPS.

I also have multiple subdomains I want to manage independently.

I'm going to post an example from my configuration but will omit the boring stuff.

The main website that regular user should be able to browse:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name www.myserver.com;

    root /var/www/www.myserver.com;

    index index.php index.html index.htm;
}

One of the subdomains:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name subdomain.myserver.com;

    location / {
        proxy_pass https://127.0.0.1:8500;
    }
}

And now I want to redirect traffic from port 80 to HTTPS:

server {
    listen 80;
    listen [::]:80;

    server_name subdomain.myserver.com;

    return 301 https://subdomain.myserver.com$request_uri;
}

The problem: ALL subdomains are automatically redirected to "https://subdomain.myserver.com", even if they do not match the server name specified in the redirect block.

"http://www.myserver.com" (for which there is no port-80 config block) gets redirected to "https://subdomain.myserver.com" even though it doesn't match the server_name.

EC2: Creating pem files for external users

Posted: 07 Aug 2021 04:56 PM PDT

I'm fairly new to this. I'm running a bunch of EC2 machines, and when creating my AWS account I got my own .pem file in order to connect to my machines, for which I have full access rights, etc.

As I'm working with freelancers and developers I want to give them full access rights for a specific instance without of course sharing my very own .pem file.

What is the easiest and most pragmatic way to do that? What are the steps, and will the freelance developers then also be able to fully connect to the machine with read/write access to everything on this instance?
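
The usual pattern, sketched under the assumption of an Amazon Linux AMI (user name ec2-user; it's ubuntu on Ubuntu AMIs): each freelancer generates their own key pair and sends you the public half, which you append on the instance instead of sharing your .pem. Anyone with an account this way gets whatever read/write access that Linux user has; full root is a separate sudoers decision.

ssh-keygen -t ed25519 -f freelancer_key          # run by the freelancer; they keep the private half
cat freelancer_key.pub | ssh -i my-own.pem ec2-user@INSTANCE_IP \
  'cat >> ~/.ssh/authorized_keys'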

Thanks for your feedback in advance, Matt

ansible: run task multiple times on the same host, using variables from another

Posted: 07 Aug 2021 09:35 PM PDT

I'm new to Ansible and can't find a reference for my issue, which does not seem so rare.

I have two hosts under the same group, each of them with its variables, say:

[myHosts]
host1 a=1 b=10
host2 a=2 b=20

Now, I have a task which needs to be executed twice on host1 only, once with host1's variables, and the second time with the value of a from host1, and b from host2. If I write it this way:

- role: my_role
  vars:
    a_val: "{{ a }}"
    b_val: "{{ b }}"
  loop: "{{ groups['myHosts'] }}"
  when: inventory_hostname in groups['myHosts'][0]

I get a_val and b_val populated with host1 values only (which is fine for a_val, not for b_val).

I know I could simply call the task several times, referencing the proper value of b with Ansible magic vars (hostvars[groups['myHosts'][1]]['b'] would do the trick, for example), but there could be 10 hosts tomorrow and that would be annoying (in that case, the when condition would still be fine, since everything will always be executed there).

How can I generalize to have b_val populated with the proper value?
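
A minimal sketch of one way to generalize, assuming the role and group names from the question and that this runs as a task (include_role) rather than a roles: entry: loop over the whole group, pinning a to host1 and taking b from each loop host.

- include_role:
    name: my_role
  vars:
    a_val: "{{ hostvars[groups['myHosts'][0]]['a'] }}"   # always host1's a
    b_val: "{{ hostvars[item]['b'] }}"                   # b from each host in turn
  loop: "{{ groups['myHosts'] }}"
  when: inventory_hostname == groups['myHosts'][0]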

ESXi 6.7 Connect two VM's on ESXi and connect them using router

Posted: 07 Aug 2021 05:07 PM PDT

I have an ESXi host installed on a VMware workstation.
On the ESXi, I have two Ubuntu virtual machines and 1 freesco router.

I want to connect both Ubuntu virtual machines via the router. The virtual machines are on different networks: I've set static IPs on both so that each is on a different network.

Freesco router config:

link

ESXi host:

link

VM1 Static IP: 192.168.204.2

VM2 Static IP: 10.10.10.2

Switch topology: link

VM1 settings: link

Portgroups: link

Both VMs are connected to the same port group, i.e. "connect", and the same switch, "newSwitch".

I want to connect both VMs via the router. How can I do this?
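
A hedged sketch of the routing half, assuming the freesco router holds the .1 address on each network (not visible from the linked screenshots): for cross-subnet traffic, each VM must point its default route (or at least a route to the other subnet) at the router's interface on its own network.

ip route replace default via 192.168.204.1   # on VM1
ip route replace default via 10.10.10.1      # on VM2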

How to change default of max open files per process?

Posted: 07 Aug 2021 08:05 PM PDT

I changed the max open files to 20000. However, I'm still running into limits, and I found that there is a per-process limit. How do I change the default per-process limit too?

ubuntu@ip-172-16-137-139:~$ cat /proc/1237/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             31538                31538                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       31538                31538                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

ubuntu@ip-172-16-137-139:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31538
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 20000               <- changed this
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31538
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
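
A hedged note with a sketch: the values in /proc/<pid>/limits come from whatever started that process, so for a daemon the login shell's ulimit is irrelevant and the limit is usually set by systemd. Raising it per unit, or for all units, looks like this (the unit name is a placeholder):

sudo systemctl edit myservice
#   [Service]
#   LimitNOFILE=20000
sudo systemctl daemon-reload && sudo systemctl restart myservice

# or raise the default for every unit in /etc/systemd/system.conf:
#   DefaultLimitNOFILE=20000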

IIS returning 404 on PDF File

Posted: 07 Aug 2021 04:04 PM PDT

I have an IIS 10.0 server where everything is working fine, with one exception: any .pdf file returns 404. I know permissions are correct, as all the image files in the same folder are served fine.

The PDF MIME type exists in both the IIS root and the site, and there is no request filtering set.

Most of the results on the web are for older versions of IIS, so I am out of ideas. Has anyone else run into this?
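
One hedged way to rule out per-site overrides from the command line (the site name "Default Web Site" is an assumption): list what IIS actually has in effect for static content and request filtering at the site level, since an inherited .pdf mapping can be removed per-site without it being obvious in the UI.

%windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:staticContent | findstr /i pdf
%windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:requestFiltering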

Multiple Subdomains pointing to the same public IP and port

Posted: 07 Aug 2021 09:43 PM PDT

I would like to explain the scenario that we have then ask the question:

we have the domain:

  • www.example.com

and the following sub domains:

  • forum.example.com
  • portal.example.com
  • crm.example.com

and the following applications which are hosted on separate servers:

  • forum - Server IP: 1.1.1.1, port: 1010
  • portal - Server IP: 2.2.2.2, port: 2020
  • crm - Server IP: 3.3.3.3, port: 3030

all the server are running behind a firewall

on the other hand we have only one public IP address:

10.10.10.10

So, to link the local servers with the public IP, we can create virtual hosting entries in the firewall and publish ports as follows:

  • 10.10.10.10:1010 will point to 1.1.1.1:1010
  • 10.10.10.10:2020 will point to 2.2.2.2:2020
  • 10.10.10.10:3030 will point to 3.3.3.3:3030

Then we will configure the following A record in the DNS zone:

  • www.example.com - 10.10.10.10

So, to go to the forum application, the user has to type:

  • www.example.com:1010

and so on:

  • www.example.com:2020 for portal
  • www.example.com:3030 for crm

Now, instead of using the port number, we would like to use the subdomain; for example, if the user wants to go to the forum, he will just type:

  • forum.example.com

and same for the other applications.

Can this be done without purchasing a new public IP address for each application?
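
Yes in principle: this is what a name-based reverse proxy does. A hedged sketch with nginx on whatever machine receives 10.10.10.10:80 (the software choice and file path are assumptions): point A records for all three subdomains at 10.10.10.10 and route by Host header, so no extra ports or IPs are needed.

sudo tee /etc/nginx/conf.d/apps.conf >/dev/null <<'EOF'
server { listen 80; server_name forum.example.com;  location / { proxy_pass http://1.1.1.1:1010; proxy_set_header Host $host; } }
server { listen 80; server_name portal.example.com; location / { proxy_pass http://2.2.2.2:2020; proxy_set_header Host $host; } }
server { listen 80; server_name crm.example.com;    location / { proxy_pass http://3.3.3.3:3030; proxy_set_header Host $host; } }
EOF
sudo nginx -t && sudo systemctl reload nginx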

Sorry for the long post. Thanks

Nginx HTTP2 IOS 11 not working

Posted: 07 Aug 2021 10:09 PM PDT

I have problems with the HTTP2 protocol on my NGINX server. This is my configuration:

listen 443 ssl http2;
server_name adomain.com;
root /var/www/project;

limit_req zone=one burst=60 nodelay;

add_header Strict-Transport-Security "max-age=2592000; includeSubdomains;" always;
ssl_certificate     /etc/letsencrypt/live/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

resolver 8.8.8.8;
ssl_stapling on;
ssl_stapling_verify on;

keepalive_timeout 70;

I can't see the error on my iOS device (Safari 11). It's very strange: the webpage is a SPA (Angular) that makes requests to an API. The app loads over HTTP2, but when it has to make requests to the API it fails; disabling HTTP2 in the listen directive makes everything work as expected.

The ciphers for both servers frontend/backend are the same

In Chrome/Firefox/IE it works fine; I don't know what is wrong with Safari or my server config.

The error.log and adomain-error.log are empty when Safari fails

Nginx Version

nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

UPDATE

The console on my iPhone says Protocol error, so I'm pretty sure it's an iOS 11 issue.

UPDATE 2

I have found this post

https://www.nginx.com/blog/http2-theory-and-practice-in-nginx-stable-13/  

It explains that if you support TLS versions lower than 1.2, you will end up with a PROTOCOL ERROR. Leaving just TLSv1.2 in my server config makes the app work again, but it's buggy: some requests will fail. That's beyond my comprehension; once again it works in Chrome/Firefox, but in mobile Safari it doesn't.

UPDATE 3 [2019/02/28]

There was a bug in our NGINX config for the OPTIONS method of CORS requests, causing duplicated Content-Length and Content-Type headers in the response. After we solved that, the app started working fine over HTTP/2. We also changed the status of the OPTIONS response from 200 to 204.
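
For anyone hitting the same thing, a hedged way to reproduce the root cause found above: replay the CORS preflight and inspect the response headers for duplicated Content-Length/Content-Type, which strict HTTP/2 clients like Safari reject as a protocol error (the URL and header values are placeholders).

curl -sk -D - -o /dev/null --http2 -X OPTIONS https://adomain.com/api/resource \
  -H 'Origin: https://adomain.com' -H 'Access-Control-Request-Method: GET'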

debian- [ERROR] MySQL server not working "Unit mysql.service entered failed state."

Posted: 07 Aug 2021 04:04 PM PDT

This is my first time setting up a MySQL server on Debian. I was trying to install Joomla! using this tutorial: docs.joomla.org/Installing_Joomla_on_Debian_Linux

I ended up installing everything, and it went fine until the demo ended. Since then, for some reason, MySQL has stopped working.

I made quite a mess trying to fix the issue by myself, so I figured I would ask for some help.

Also, I got the error Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) for a while, and it changed when I edited my.cnf this way (the socket used to be at /opt/lampp/var/mysql/):

[mysqld]
user   = mysql
port   = 3306
socket = /var/lib/mysql/mysql.sock

So here is the error I got (details from systemctl status mysql.service):

mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled)
   Active: activating (start) since lun. 2016-08-22 11:09:35 CEST; 2s ago
  Process: 25694 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
  Control: 25698 (mysqld)
   CGroup: /system.slice/mysql.service
           └─25698 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

And the "journalctl -xn" result:

-- Logs begin at lun. 2016-08-22 10:15:10 CEST, end at lun. 2016-08-22 11:11:03 CEST. --
août 22 11:10:55 tarte systemd[1]: Unit mysql.service entered failed state.
août 22 11:10:58 tarte systemd[1]: mysql.service: control process exited, code=exited status=2
août 22 11:10:58 tarte systemd[1]: Failed to start MySQL Community Server.
-- Subject: L'unité (unit) mysql.service a échoué
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

Finally, here is my /etc/mysql/my.cnf (part of it, at least):

# The following options will be passed to all MySQL clients
[client]
#password       = your_password
port            = 3306
socket          = /opt/lampp/var/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
user = mysql
port = 3306
socket          = /var/lib/mysql/mysql.sock
skip-external-locking
key_buffer = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M

I have no clue as to what I should do now; ask for more info if necessary. Please be specific, since I have a hard time understanding all this.
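
Two hedged starting points: the [client] section above still points at the old XAMPP socket (/opt/lampp/var/mysql/mysql.sock) while [mysqld] now uses /var/lib/mysql/mysql.sock, so the two should probably be aligned; and the actual startup failure is usually spelled out in the logs (the error log path is an assumption for Debian).

sudo journalctl -u mysql.service --no-pager | tail -n 50
sudo tail -n 50 /var/log/mysql/error.log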

HTTPS access to phpmyadmin DOWNLOADS INDEX.php instead of opening phpmyadmin

Posted: 07 Aug 2021 05:07 PM PDT

When I open phpMyAdmin without encryption, everything goes fine. However, if I switch to HTTPS, the browser downloads the index file.

Apache 000-default.conf:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

###### I tried with and without the next group, with no luck:
<VirtualHost *:443>
    ServerAdmin webmaster@localhost
    ServerName my.server:443
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

My apache.conf is the default one (installed by phpMyAdmin); the relevant part is here: Alias /phpyadmin /usr/share/phpmyadmin

<Directory /usr/share/phpmyadmin>
    Options FollowSymLinks
    DirectoryIndex index.php

    <IfModule mod_php5.c>
        <IfModule mod_mime.c>
            AddType application/x-httpd-php .php
        </IfModule>
        <FilesMatch ".+\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

        php_flag magic_quotes_gpc Off
        php_flag track_vars On
        php_flag register_globals Off
        php_admin_flag allow_url_fopen Off
        php_value include_path .
        php_admin_value upload_tmp_dir /var/lib/phpmyadmin/tmp
        php_admin_value open_basedir /usr/share/phpmyadmin/:/etc/phpmyadmin/:/var/lib/phpmyadmin/:/usr/share/php/php-gettext/:/usr/share/javascript/:/usr/share/php/tcpdf/
    </IfModule>
</Directory>

BTW, I have Virtualmin on the server, but I am installing phpMyAdmin at the root (so it should work as a normal installation, right?).
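
A hedged check: a browser downloading index.php over HTTPS usually means the :443 vhost is serving PHP files as plain downloads, i.e. the PHP handler isn't active for that vhost. On a Debian-style Apache (commands assumed to match this mod_php5-era setup):

apachectl -M | grep -Ei 'php|ssl'      # both a php module and ssl_module should appear
sudo a2enmod ssl && sudo a2enmod php5  # enable whichever is missing
sudo systemctl restart apache2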

Replication issue on all 3 domain controllers

Posted: 07 Aug 2021 06:03 PM PDT

We have 3 domain controllers on Server 2012. Replication is failing miserably.

Repadmin /replsummary

dc1 rpc server not available

dc 2 rpc server not available

dc 3 "Insufficient attributes were given to create an object"

I will post the dcdiag output soon.

But one thing still stuck in my mind from the dcdiag output is that this DC is not advertising as a time server, even though the time configuration is correct on the PDC and the other DCs.

Replsummary and dcdiag output:

C:\Users\admin>repadmin /replsummary
Replication Summary Start Time: 2016-01-13 09:58:55

Beginning data collection for replication summary, this may take awhile:
  ......

Source DSA          largest delta    fails/total %%   error
DC01                  26m:42s    0 /  10    0
DDC                   26m:47s    0 /  10    0
DC02          03h:58m:50s   10 /  10  100  (1722) The RPC server is unavailable.

C:\Users\admin>dcdiag

Directory Server Diagnosis

Performing initial setup:
   Trying to find home server...
   Home Server = DC02
   * Identified AD Forest.
   Done gathering initial info.

Doing initial required tests

   Testing server: -Irving\DC02
      Starting test: Connectivity
         ......................... DC02 passed test Connectivity

Doing primary tests

   Testing server: Irving\DC02
      Starting test: Advertising
         Warning: DC02 is not advertising as a time server.
         ......................... DC02 failed test Advertising
      Starting test: FrsEvent
         ......................... DC02 passed test FrsEvent
      Starting test: DFSREvent
         There are warning or error events within the last 24 hours after the
         SYSVOL has been shared.  Failing SYSVOL replication problems may cause
         Group Policy problems.
         ......................... DC02 passed test DFSREvent
      Starting test: SysVolCheck
         ......................... DC02 passed test SysVolCheck
      Starting test: KccEvent
         ......................... DC02 passed test KccEvent
      Starting test: KnowsOfRoleHolders
         ......................... DC02 passed test KnowsOfRoleHolders
      Starting test: MachineAccount
         ......................... DC02 passed test MachineAccount
      Starting test: NCSecDesc
         ......................... DC02 passed test NCSecDesc
      Starting test: NetLogons
         [DC02] User credentials does not have permission to perform this operation.
         The account used for this test must have network logon privileges
         for this machine's domain.
         ......................... DC02 failed test NetLogons
      Starting test: ObjectsReplicated
         ......................... DC02 passed test ObjectsReplicated
      Starting test: Replications
         [Replications Check,DC02] DsReplicaGetInfo(PENDING_OPS, NULL)
         failed, error 0x2105 "Replication access was denied."
         ......................... DC02 failed test Replications
      Starting test: RidManager
         ......................... DC02 passed test RidManager
      Starting test: Services
         Could not open NTDS Service on DC02, error 0x5 "Access is denied."
         ......................... DC02 failed test Services
      Starting test: SystemLog
         ......................... DC02 passed test SystemLog
      Starting test: VerifyReferences
         ......................... DC02 passed test VerifyReferences

   Running partition tests on : ForestDnsZones
      Starting test: CheckSDRefDom
         ......................... ForestDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... ForestDnsZones passed test CrossRefValidation

   Running partition tests on : DomainDnsZones
      Starting test: CheckSDRefDom
         ......................... DomainDnsZones passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... DomainDnsZones passed test CrossRefValidation

   Running partition tests on : Schema
      Starting test: CheckSDRefDom
         ......................... Schema passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... Schema passed test CrossRefValidation

   Running partition tests on : Configuration
      Starting test: CheckSDRefDom
         ......................... Configuration passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... Configuration passed test CrossRefValidation

   Running partition tests on : ssd
      Starting test: CheckSDRefDom
         ......................... ssd passed test CheckSDRefDom
      Starting test: CrossRefValidation
         ......................... ssd passed test CrossRefValidation

   Running enterprise tests on : ssd.local
      Starting test: LocatorCheck
         ......................... ssd.local passed test LocatorCheck
      Starting test: Intersite
         ......................... ssd.local passed test Intersite
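
Two hedged observations from the output above: the NetLogons/Services/Replications failures all look like permission errors, which often just means dcdiag was run without an elevated Domain Admin session; and the time-server warning can be checked directly. From an elevated prompt on DC02:

dcdiag /test:replications
repadmin /showrepl DC02
w32tm /query /status
w32tm /resync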

Unzip from stdin to stdout - funzip, python

Posted: 07 Aug 2021 09:37 PM PDT

The goal is to read a zip file from stdin and uncompress to stdout.

funzip works and is the solution I am looking for; the zip contains a single file. Unfortunately, funzip fails when the compressed file size is around 1 GB or greater:

funzip error: invalid compressed data--length error  

Update: I have discovered the above error may not indicate an actual problem. Comparing two uncompressed files, one unzipped traditionally and the other through a pipe using funzip (with the above error written to stderr), the files are identical. I'd like to keep this open so this can be confirmed or reported.

A related solution using python: Unzipping files that are flying in through a pipe

However, its output is directed to a file.
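
One hedged alternative, assuming bsdtar (libarchive) is available: it can stream a zip from stdin and write a member to stdout, and it does not share funzip's 32-bit length check.

cat archive.zip | bsdtar -xOf -   # -O extracts to stdout, -f - reads the archive from stdin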

msi for Web Deploy 3.6 for Hosting Servers... where to find?

Posted: 07 Aug 2021 08:05 PM PDT

On Win2012-R2

The Web Platform Installer offers an option (that I need...): "Web Deploy 3.6 for Hosting Servers"


I would like to get this into my DSC script, but cannot find the discrete MSI(s) on download.microsoft.com or elsewhere on microsoft.com.

How to automate the installation of this puppy?
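
One hedged route, rather than hunting for the MSI: the Web Platform Installer ships a command-line tool that can be invoked from a DSC Script resource. The product ID below is deliberately a placeholder; /List shows the exact name to use.

WebpiCmd.exe /List /ListOption:Available | findstr /i "Deploy"
WebpiCmd.exe /Install /Products:<id-from-list> /AcceptEula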

Proxmox vps container connection problems

Posted: 07 Aug 2021 07:00 PM PDT

I have Proxmox on my node server, which has IP 5.189.190.*, and I created an OpenVZ container on IP 213.136.87.* and installed CentOS 6 on it.

The problem:

  • Can't connect to the container via SSH directly
  • Can't open the Apache/CentOS welcome page
  • When I enter the container from the node, I can't ping any sites or wget any URL, but I can connect to 127.0.0.1 and the main node IP

My configuration: container /etc/resolv.conf

nameserver 8.8.8.8
nameserver 8.8.4.4

container /etc/sysconfig/network-scripts/ifcfg-venet0

DEVICE=venet0
BOOTPROTO=static
ONBOOT=yes
IPADDR=213.136.87.*
NETMASK=255.255.255.0
BROADCAST=213.136.87.*
IPV6INIT="yes"

container /etc/sysconfig/network-scripts/ifcfg-venet0:0

DEVICE=venet0:0
ONBOOT=yes
IPADDR=213.136.87.*
NETMASK=255.255.255.0

node /etc/network/interfaces

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address  5.189.190.*
        netmask  255.255.255.0
        gateway  5.189.190.*
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

node /etc/resolv.conf has the datacenter's nameservers correctly set.

container ping result:

# ping google.com -c 3
ping: unknown host google.com

container traceroute result:

# traceroute google.com
google.com: Temporary failure in name resolution
Cannot handle "host" cmdline arg `google.com' on position 1 (argc 1)

node ping result:

# ping google.com -c 3
PING google.com (74.125.29.139) 56(84) bytes of data.
64 bytes from qg-in-f139.1e100.net (74.125.29.139): icmp_req=1 ttl=41 time=110 ms
64 bytes from qg-in-f139.1e100.net (74.125.29.139): icmp_req=2 ttl=41 time=110 ms
64 bytes from qg-in-f139.1e100.net (74.125.29.139): icmp_req=3 ttl=41 time=110 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 110.450/110.462/110.474/0.383 ms

node traceroute result:

# traceroute google.com
traceroute to google.com (74.125.29.139), 30 hops max, 60 byte packets
 1  ip-1-90-136-213.static.contabo.net (213.136.90.1)  0.506 ms  0.517 ms  0.513 ms
 2  ffm-b11-link.telia.net (62.115.36.237)  0.493 ms  0.491 ms  0.484 ms
 3  hbg-b1-link.telia.net (62.115.139.164)  15.379 ms  15.393 ms  15.384 ms
 4  hbg-bb4-link.telia.net (213.155.135.88)  16.048 ms hbg-bb4-link.telia.net (213.155.135.86)  15.419 ms hbg-bb4-link.telia.net (213.155.135.84)  15.456 ms
 5  nyk-bb1-link.telia.net (80.91.247.127)  96.568 ms nyk-bb2-link.telia.net (80.91.247.123)  107.638 ms nyk-bb1-link.telia.net (80.91.247.129)  96.582 ms
 6  nyk-b6-link.telia.net (213.155.130.251)  105.478 ms  105.470 ms nyk-b6-link.telia.net (80.91.254.32)  101.005 ms
 7  google-ic-303645-nyk-b6.c.telia.net (213.248.78.250)  101.235 ms  105.746 ms  105.719 ms
 8  209.85.248.242 (209.85.248.242)  101.694 ms  106.213 ms  106.250 ms
 9  209.85.249.212 (209.85.249.212)  101.225 ms 209.85.246.4 (209.85.246.4)  101.597 ms 209.85.252.242 (209.85.252.242)  101.179 ms
10  209.85.249.11 (209.85.249.11)  102.247 ms  112.917 ms 72.14.239.93 (72.14.239.93)  97.931 ms
11  64.233.174.9 (64.233.174.9)  104.733 ms 66.249.95.229 (66.249.95.229)  109.232 ms 66.249.95.231 (66.249.95.231)  106.086 ms
12  72.14.234.53 (72.14.234.53)  106.179 ms 72.14.238.73 (72.14.238.73)  110.471 ms 72.14.234.53 (72.14.234.53)  106.170 ms
13  * * *
14  qg-in-f139.1e100.net (74.125.29.139)  110.479 ms  110.656 ms  106.154 ms
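
A hedged sketch of things to check on the node, given that venet traffic is routed through the host and the container's 213.136.87.* address is in a different subnet from the node's own 5.189.190.*: forwarding must be on, the node may need to answer ARP for the container IP on the uplink, and the provider must actually route that IP to this node.

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.vmbr0.proxy_arp=1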

Any ideas will be welcomed

Thanks

Recovering data from MongoDB raw files

Posted: 07 Aug 2021 03:00 PM PDT

We use MongoDB for our database and have a replica set (two servers), but we mistakenly deleted some raw files under /path/to/dbdata on both servers. After that, we used extundelete to get back the deleted files; we ran extundelete on both servers and merged the results (database.1, database.2, etc.). We could not start mongod: starting mongod or executing mongodump raised the following error. Here is the console output:

root@mongod:/opt/mongodb# mongodump --repair --dbpath /opt/mongodb -d database_production
Thu Aug 21 16:22:43.258 [tools] warning: repair is a work in progress
Thu Aug 21 16:22:43.258 [tools] going to try and recover data from: database_production
Thu Aug 21 16:22:43.262 [tools] Assertion failure isOk() src/mongo/db/pdfile.h 392
0xde1b01 0xda42fd 0x8ae325 0x8ac492 0x8bd8e0 0x8c1c51 0x80e345 0x80e607 0x80e6a4 0x6db92a 0x6dc1ff 0x6e0db9 0xd9e45e 0x6ccdc7 0x7f499d856ead 0x6ccc29
 mongodump(_ZN5mongo15printStackTraceERSo+0x21) [0xde1b01]
 mongodump(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda42fd]
 mongodump(_ZNK5mongo7Forward4nextERKNS_7DiskLocE+0x1a5) [0x8ae325]
 mongodump(_ZN5mongo11BasicCursor7advanceEv+0x82) [0x8ac492]
 mongodump(_ZN5mongo8Database19clearTmpCollectionsEv+0x160) [0x8bd8e0]
 mongodump(_ZN5mongo14DatabaseHolder11getOrCreateERKSsS2_Rb+0x7b1) [0x8c1c51]
 mongodump(_ZN5mongo6Client7Context11_finishInitEv+0x65) [0x80e345]
 mongodump(_ZN5mongo6Client7ContextC1ERKSsS3_b+0x87) [0x80e607]
 mongodump(_ZN5mongo6Client12WriteContextC1ERKSsS3_+0x54) [0x80e6a4]
 mongodump(_ZN4Dump7_repairESs+0x3a) [0x6db92a]
 mongodump(_ZN4Dump6repairEv+0x2df) [0x6dc1ff]
 mongodump(_ZN4Dump3runEv+0x1b9) [0x6e0db9]
 mongodump(_ZN5mongo4Tool4mainEiPPc+0x13de) [0xd9e45e]
 mongodump(main+0x37) [0x6ccdc7]
 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f499d856ead]
 mongodump(__gxx_personality_v0+0x471) [0x6ccc29]
assertion: 0 assertion src/mongo/db/pdfile.h:392
Thu Aug 21 16:22:43.271 dbexit:
Thu Aug 21 16:22:43.271 [tools] shutdown: going to close listening sockets...
Thu Aug 21 16:22:43.271 [tools] shutdown: going to flush diaglog...
Thu Aug 21 16:22:43.271 [tools] shutdown: going to close sockets...
Thu Aug 21 16:22:43.272 [tools] shutdown: waiting for fs preallocator...
Thu Aug 21 16:22:43.272 [tools] shutdown: closing all files...
Thu Aug 21 16:22:43.273 [tools] closeAllFiles() finished
Thu Aug 21 16:22:43.273 [tools] shutdown: removing fs lock...
Thu Aug 21 16:22:43.273 dbexit: really exiting now

My env:

  1. Debian 3.2.35-2 x86_64(it's a XEN virtual machine)
  2. mongodb 2.4.6

and we did not delete the .0 and .ns files.

We tried creating a new database with the same name and copying the db.ns and db.2, db.3 files to the new db; we still hit the same error.

Is there any way to check the validity of the raw .ns and data files, and how can we recover the database?
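
One more hedged thing to try before deeper surgery, since mongodump --repair asserts: a full server-side repair into a fresh directory sometimes salvages what the per-database dump cannot (paths assumed; work on copies of the recovered files).

mongod --dbpath /opt/mongodb --repair --repairpath /opt/mongodb-repaired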

2-way SSL with apache forward proxy

Posted: 07 Aug 2021 03:00 PM PDT

I'm working on setting up Apache as a forward proxy to a client that uses 2-way SSL. The basic flow is myApplication --via http--> Apache proxy --via 2-way SSL--> client. After setting everything up, when I try to start Apache I get an "incomplete client cert configured for SSL proxy (missing or encrypted private key?)" error. What I can't figure out is that the client cert I'm using in the SSLProxyMachineCertificateFile directive already has both the unencrypted private key and the public cert. Any suggestions on what I'm missing and/or anything else I can try? Does the all-in-one machine cert need to have the chain in it as well?

Here's what my vhost looks like.

<VirtualHost *:8082>
    ServerName my.domain.com

    ProxyRequests On
    SSLProxyEngine On

    SSLProxyMachineCertificateFile /etc/httpd/keys/machine.pem
    SSLProxyCACertificateFile /etc/httpd/keys/machine.chain.crt

    ProxyPass / https://target.client.com/
    ProxyPassReverse / https://target.client.com/

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
</VirtualHost>

EDIT: I updated the basic flow to clarify what kind of connection I'm trying to use between the application, apache, and the client.
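
A hedged sanity check on the combined PEM, since SSLProxyMachineCertificateFile expects the unencrypted key and certificate concatenated in one file: confirm both halves parse and that the key block carries no ENCRYPTED header.

openssl x509 -in /etc/httpd/keys/machine.pem -noout -subject   # the cert half parses
openssl rsa  -in /etc/httpd/keys/machine.pem -check -noout     # the key half parses and is valid
grep -c ENCRYPTED /etc/httpd/keys/machine.pem                  # should print 0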

Outlook 2010 cannot reply to encrypted email

Posted: 07 Aug 2021 10:09 PM PDT

A coworker and I occasionally use encrypted email to send passwords. We are both using Outlook 2010, and both of our Digital IDs were created by the same authority. For my coworker, creating, replying to, and reading my encrypted emails works just fine. But for me: I can read his encrypted emails and I can send him encrypted emails, but I cannot reply to his encrypted emails. I always get the standard Outlook encryption error message:

"Microsoft Outlook had problems encrypting this message because the following recipients had missing or invalid certificates, or conflicting or unsupported encryption capabilities:"

It then lists his correct email address and offers to Send Unencrypted or Cancel.

Any ideas what could cause this? If I choose Send Unencrypted, or unselect Encryption before sending, the email goes through.

Update: when I reply to an encrypted message, if I delete the email address in the To box and then retype the exact same address, it works. This made me think I had duplicate addresses for my coworker, so I deleted him completely from my contact list. I know he's not in there at all, because Outlook can't find him when I try to send a new message. I had him send me a new encrypted email and also sign it. I can reply to this email. Then I added him to my contact list again, but I still can't reply to other encrypted emails. If I right-click on his address, I can view the contact card and see the cert is in there, but it doesn't send. It also shows the error message described above twice. (I have to cancel out twice.)

Update 2: When the error pops up, if I choose the option for Send Unencrypted, I get another error message: "The operation failed. The messaging interfaces have returned an unknown error. If the problem persists, restart Outlook. Cannot resolve recipient." If I then press OK, and try to send again, it sends successfully (unencrypted). I think the last part of that error message "cannot resolve recipient" is relevant to what's going on. It seems that the email in the To field is misbehaving, but only when it's first populated via reply.

Update 3: I just had a new scenario, which is related: I replied to a regular (unencrypted) email, decided to encrypt it, and had the same problem (same person). I wiped out the email address in the To box, re-entered it identically, and then it sent. So the title of this post might better be "Outlook 2010 cannot encrypt an email reply".

Problems hosting a Jetty application on the same server as IIS.

Posted: 07 Aug 2021 07:00 PM PDT

I am a .NET programmer, lately developing a website in JSP using Jetty. I use Eclipse and the Maven Jetty plugin.

I have a virtual private server which has IIS installed and is serving .NET websites. My domain name (for the JSP website) points to this server.

My question is: How do I connect the domain name to the website in Jetty? Jetty listens to port 8080, and IIS to port 80.

I tried configuring a virtual host in a Jetty configuration file (jetty-web.xml), following this manual. The result: when I open a browser inside my server and navigate to mydomainname.com:8080, I get to the website. But if I do it externally, I get nothing.

  1. What do I need to configure in order to get to my website externally?
  2. How do I overcome the 8080 port number, or do I need to redirect my domain name to this port? (See the sketch below.)
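
A hedged sketch for question 2, assuming the Windows server has (or can get) a second IP address, since IIS already owns port 80 on the main one: either a firewall rule exposing 8080 directly, or a netsh portproxy mapping the second IP's port 80 to Jetty.

netsh advfirewall firewall add rule name="Jetty 8080" dir=in action=allow protocol=TCP localport=8080
netsh interface portproxy add v4tov4 listenaddress=SECOND_IP listenport=80 connectaddress=127.0.0.1 connectport=8080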

Thank You
