Sunday, May 29, 2022

Recent Questions - Server Fault


Combining security key based login with sshfs mount on PXE booted live system

Posted: 29 May 2022 11:25 AM PDT

TL;DR: What is the best way to mount user homes via SSHFS (or any other encrypted protocol) at login while enforcing the use of security keys like Yubikeys and Nitrokeys?

The long version: I need to build a new network consisting of (initially at least) one central server and multiple clients. The idea is to let the clients boot via PXE and then mount all needed folders via SSHFS. That should be no problem at all with a correctly configured pam_mount if I only wanted to use username and password, but I also need to enforce the use of a Yubikey as a second factor to unlock the SSHFS mounts.

Do you know of any more or less ready-to-use solutions which I can use to reach my goal?

For the sake of completeness: I did of course have a few ideas of my own, but I don't know whether any of them is actually feasible:

  1. Using pam_exec in combination with a custom script acting as a wrapper around all necessary steps (see the sketch after this list). A server daemon would check basic authentication using username and password and, if successful, respond with a (FIDO2) challenge. The client would use that challenge to generate a FIDO2 assertion, which the server would verify. On success, the server would generate a temporary SSH key pair, add the public key to the authorized_keys file, and return the private key to the client. After a specific amount of time the server would remove this entry again. I am quite sure this is feasible.

  2. Again using pam_exec and a custom script, but this time adding Keycloak to the server setup, so that the server daemon would act as a Service Provider in terms of OpenID Connect. I think the problem here is that the client (wrapper) script would need to be able to authenticate to Keycloak, because "Direct Grant"/"Resource Owner Grant" cannot be used with two-factor authentication based on hardware tokens.
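
For idea 1, a minimal sketch of the PAM side, assuming a hypothetical helper script /usr/local/sbin/fido2-sshfs-helper that performs the challenge/assertion exchange and the SSHFS mount described above:

# /etc/pam.d/common-auth fragment (sketch; the helper script is hypothetical)
# expose_authtok hands the just-entered password to the script on stdin,
# so it can perform the username+password step against the server daemon
auth    required    pam_exec.so    expose_authtok    /usr/local/sbin/fido2-sshfs-helper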

Thanks in advance for any hint.

pfSense FTP connection

Posted: 29 May 2022 10:33 AM PDT

I have set up my pfSense firewall and want to connect to my FTP server from the outside. I have set up a port forward to my server with the following settings (img 1 / img 2), but I can't seem to make a connection.

I have installed an FTP package.

img 1 / img 2

Kind regards

Reject mail based on FROM domain

Posted: 29 May 2022 09:38 AM PDT

I'm wondering if it's possible to set up a filter in Sieve that captures the FROM domain and matches it against the TO recipient mailbox name.

The use case is filtering unwanted email when companies sell or share my personal information.

Example:

This should be accepted:

FROM: no-reply@some-company.com
TO:   some-company@mydomain.com

This should be rejected:

FROM: no-reply@other-company.com
TO:   some-company@mydomain.com
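
A minimal sketch of what such a filter could look like, assuming the Sieve implementation supports the envelope, variables, and reject extensions (RFC 5228/5229/5231); the ".com" suffix is hardcoded here purely for illustration:

require ["envelope", "variables", "reject"];

# Capture the recipient localpart, e.g. "some-company"
if envelope :localpart :matches "to" "*" {
    set "rcpt" "${1}";
}

# Reject unless the sender's domain is exactly "<localpart>.com"
if not address :is :domain "from" "${rcpt}.com" {
    reject "Sender domain does not match recipient alias.";
}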

dial tcp i/o timeout when logging in to MinIO

Posted: 29 May 2022 09:15 AM PDT

I've set up a MinIO installation via Docker on one of my servers. I can access the login screen without a problem. However, the login itself does not work.

Post "https://example.com:9000/": dial tcp :9000: i/o timeout

What could be the reason for this?

This is my docker-compose.yml

version: '3.7'

services:
  minio:
    image: minio/minio:RELEASE.2022-05-19T18-20-59Z.fips
    command: server -C /etc/minio --address ":9000" --console-address ":9001" /data
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_SERVER_URL: "https://example.com:9000"
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - minio:/data
      - /etc/letsencrypt:/etc/letsencrypt/
      - /etc/minio:/etc/minio/

volumes:
  minio:

There is also an nginx instance running on that server, but I don't think that's the issue.
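
One way to narrow this down, given that the console on :9001 has to reach the API at MINIO_SERVER_URL itself (a sketch; a timeout there often points at DNS or hairpin-NAT issues with example.com, and it assumes curl is available inside the image):

# Does the API port answer from the host?
curl -vk https://example.com:9000/minio/health/live

# Can the container itself resolve and reach MINIO_SERVER_URL?
docker compose exec minio curl -vk https://example.com:9000/minio/health/live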

Logrotate Create Mode Issue

Posted: 29 May 2022 08:45 AM PDT

I am having trouble with the logrotate service on Linux. I have a logrotate config for the MongoDB log, as below:

/var/log/mongodb/mongod.log
{
   rotate 10
   daily
   dateext
   dateformat %Y-%m-%d-%s
   dateyesterday
   missingok
   create 644 mongodb mongodb
   delaycompress
   compress
   sharedscripts
   postrotate
     /bin/kill -SIGUSR1 $(pgrep mongod)
   endscript
}

As can be seen, I expect the mode of the new MongoDB log file to be 644, but it is 600; only the rotated-out log file has mode 644.

ls -l command output:

total 640
-rw------- 1 mongodb mongodb  9822 May 29 19:42 mongod.log
-rw-r--r-- 1 mongodb mongodb     0 May 29 19:29 mongod.log.2022-05-29T14-59-01

I don't understand what the problem is exactly.
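
One way to reproduce this on demand instead of waiting for the daily run (a sketch; the config path /etc/logrotate.d/mongodb is an assumption):

# Dry run: show what logrotate would do without touching any files
sudo logrotate -d /etc/logrotate.d/mongodb

# Force an immediate rotation, then inspect the resulting file modes
sudo logrotate -f /etc/logrotate.d/mongodb
ls -l /var/log/mongodb/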

Best AWS service to host software that listens on given ports [closed]

Posted: 29 May 2022 07:36 AM PDT

I am looking for the correct AWS service to use to host a piece of software.

The software itself includes modules that act as mini servers: a user can start a module, and it will then listen on a given port and should be accessible externally.

Currently I am using AWS EC2 for this, as it allows me to open all ports 0-65535 and gives me a public IP too. I am wondering whether there is a better and cheaper alternative for this use case? I have heard of EKS, etc.

Rewrite DNS requests using iptables

Posted: 29 May 2022 07:19 AM PDT

The local PC is behind NAT: say it has the local address 192.168.1.234 and the public IP is 1.2.3.4. I want port 23451 to be open to the outside world and to behave exactly like the local Ubuntu's systemd-resolved does on port 53 (keeping in mind that systemd-resolved only accepts requests directed to 127.0.0.53:53). What are the correct iptables commands to enable this redirection of ports and responses? Incoming traffic from an outside WAN address (say, 5.6.7.8) to the local PC's port 23451 should be redirected to the local PC's port 53 as if it came from the local PC itself (instead of from the outside WAN address), and the response from systemd-resolved should then be redirected back as a response to the incoming request's WAN address on the original port 23451. There must be some iptables rules to do this, but I'm not sure which chain is the right one, and whether -t nat needs to be specified or not.
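
A minimal sketch of one possible rule set, assuming eth0 is the WAN-facing interface and UDP only; DNAT toward the loopback range additionally requires route_localnet, and whether systemd-resolved accepts the rewritten source still needs testing:

# Allow externally received packets to be routed to 127.0.0.0/8
sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1

# -t nat PREROUTING: redirect incoming UDP 23451 to the stub listener
sudo iptables -t nat -A PREROUTING -i eth0 -p udp --dport 23451 \
    -j DNAT --to-destination 127.0.0.53:53

# -t nat POSTROUTING: make the request appear local so conntrack can
# map the reply back to the original WAN address and port 23451
sudo iptables -t nat -A POSTROUTING -p udp -d 127.0.0.53 --dport 53 \
    -j SNAT --to-source 127.0.0.1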

Alert created with wazuh-logtest but not in real use

Posted: 29 May 2022 06:56 AM PDT

I created a custom decoder and a custom rule to generate alerts when receiving UniFi logs via syslog. When I use the wazuh-logtest binary to test these with a UniFi log, the custom rule is triggered and an alert is generated. But in real use, nothing happens...

Here are my decoder and rule:

<decoder name="unifi">
    <prematch type="pcre2">UAP-</prematch>
</decoder>

<rule id="100013" level="5">
    <decoded_as>unifi</decoded_as>
    <description>UniFi wifi log</description>
</rule>

Here is how I configured my Wazuh manager to listen for syslog:

<remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>udp</protocol>
    <allowed-ips>my LAN IP range</allowed-ips>
</remote>

For now they are really simple, as I just want to trigger the rule and have an alert generated for any message received from the UniFi controller. I want to be sure that the log matches my decoder; no need to extract any information yet.

FYI, here's what a UniFi log looks like (captured with a syslog server):

May 28 17:36:23 wap001 78455819c06f,UAP-AC-InWall-6.0.18+13660: kernel: [ 205.373214] ol_ath_vap_set_param: Now supported MGMT RATE is 6000(kbps) and rate code: 0x3  

As I said, it triggers the rule and creates an alert when I try it with /var/ossec/bin/wazuh-logtest, but not in real use.

I already configured the same stuff for Synology logs and it works great. But for Unifi it doesn't.

I am using Wazuh v4.2.5 and UniFi controller v7.1.65. My Wazuh and UniFi servers are both Debian VMs. The Wazuh agent is not installed on the UniFi controller; I only want to use syslog for now.
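
Since wazuh-logtest bypasses the network path entirely, it may be worth confirming that the UniFi syslog packets actually reach the manager (a sketch; assumes tcpdump is available there):

# On the Wazuh manager: watch for incoming syslog traffic from the controller
sudo tcpdump -n -i any udp port 514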

Many thanks for your help!

First asked on Reddit

Nagios on a virtual network

Posted: 29 May 2022 06:46 AM PDT

I am trying to emulate a virtual network in Kathara (formerly Netkit), based on OSPF and BGP routing, and I am new to this. After emulating the network I need to monitor it using Nagios, but my question is: how is that possible? My virtual network is running on an Ubuntu distro. Do I need another virtual machine where I should install Nagios, or how does it work exactly? I am new to these technologies and I don't really understand what point I am missing. From what I know, Nagios should be installed on a server and NRPE on the monitored ones, but in this situation I just cannot see it clearly.

Thank you in advance and excuse me if any mistakes have been made.

Apache2 on Ubuntu EC2 goes down and does not restart

Posted: 29 May 2022 05:01 AM PDT

History:

We moved a CodeIgniter 3 installation from Bluehost to a t3.2xlarge EC2 instance. That single instance hosts apache2 and a MySQL server as a local database.

On Bluehost the site was running fine; the migration was done because Bluehost itself had outages and we wanted more reliable hosting.


Error

Since the migration, the page randomly goes down completely. Trying to restart apache2 with:

sudo service apache2 restart

does not work; it requires a full reboot of the EC2 instance to get the service running again. After rebooting, apache2 and MySQL are running and the page is up without manually starting the services.


Debug attempt 1

Since the page went down when database-intensive crons were running, I assumed the MySQL server was the bottleneck. Migrating the full database to a serverless RDS instance should eliminate all database-related bottlenecks, and the same database-intensive crons now finish. To further rule out the crons as the reason for the system going down, I cloned the EC2 instance and used the clone to run the crons while the original hosts the webpage the domain points to.

However, random outages still persist.


Debug attempt 2

Assuming it was a memory issue, I checked phpinfo.php and saw that PHP had a 128 MB memory limit (on a 32 GB machine), so just to see if more RAM helps, I:

  1. set memory_limit to 8192M
  2. rebooted the EC2 instance
  3. restarted php7.4-fpm
  4. restarted apache2

phpinfo confirmed the memory_limit is set to 8192M.

Random outages still persist.


Debug attempt 3

Checking the command:

sudo apache2ctl -t  

returns:

Syntax OK

Checking the command:

nano /var/log/apache2/error.log  

contains:

[mpm_worker:notice] AH00295: caught SIGTERM, shutting down

So I assume that Apache is shutting down for some reason but is not able to restart.
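
Since a caught SIGTERM means something outside Apache sent the signal, it could help to check where it came from (a sketch; the time window is an assumption):

# Was apache2 stopped by systemd, and when?
sudo journalctl -u apache2 --since "2 hours ago"

# Did the kernel OOM killer kill processes around that time?
sudo dmesg -T | grep -i -E "out of memory|oom|killed process"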

Checking the command:

sudo service apache2 restart  

does not throw errors

Checking the command:

sudo apache2ctl restart  

does not throw errors

Checking the command:

/usr/sbin/apache2 -V  

shows:

[core:warn] [pid 24560] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 81 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
Server version: Apache/2.4.41 (Ubuntu)
Server built:   2022-03-16T16:52:53
Server's Module Magic Number: 20120211:88
Server loaded:  APR 1.6.5, APR-UTIL 1.6.1
Compiled using: APR 1.6.5, APR-UTIL 1.6.1
Architecture:   64-bit
Server MPM:
Server compiled with....
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=256
 -D HTTPD_ROOT="/etc/apache2"
 -D SUEXEC_BIN="/usr/lib/apache2/suexec"
 -D DEFAULT_PIDLOG="/var/run/apache2.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="mime.types"
 -D SERVER_CONFIG_FILE="apache2.conf"

Here I can see two things:

  • there is an issue with ${APACHE_RUN_DIR}
  • Server MPM does not show an MPM
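
For what it's worth, the ${APACHE_RUN_DIR} warning (and the empty Server MPM line) is expected when the apache2 binary is invoked directly on Debian/Ubuntu, because those variables live in /etc/apache2/envvars and are only sourced by apache2ctl; a quick comparison (a sketch):

# Invoking the binary directly misses the environment variables...
/usr/sbin/apache2 -V

# ...while apache2ctl sources /etc/apache2/envvars first
sudo apache2ctl -V

# Or load them manually before calling the binary
source /etc/apache2/envvars && /usr/sbin/apache2 -V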

Checking the command:

 apache2 -l  

returns:

Compiled in modules:
core.c
mod_so.c
mod_watchdog.c
http_core.c
mod_log_config.c
mod_logio.c
mod_version.c
mod_unixd.c

which does not show an MPM module.

Checking the command:

apache2 -l
apache2ctl -l

returns:

Compiled in modules:
core.c
mod_so.c
mod_watchdog.c
http_core.c
mod_log_config.c
mod_logio.c
mod_version.c
mod_unixd.c

Checking the command:

a2query -M  

returns:

worker


Question:

And this is where I have been stuck. Is there anything else I can check, or anything more to read from debug attempt 3, to see why Apache stops and cannot be restarted without a full server reboot?

Can't Recover Space from qcow2 image without deleting wanted snapshots

Posted: 29 May 2022 09:34 AM PDT

I have a virtual machine that started out with five snapshots: 1, 2, 3, 4, 5.

I used qemu-img to delete snapshots 1,2,3. Snapshots 4 and 5 are still needed and were not deleted.

How can I release the space used by snapshots 1,2,3 and retain snapshots 4 and 5?

I have spent a lot of time searching for a solution, and the solutions I have tried got the following results:

  • Using qemu-img convert (deletes needed snapshots):

    qemu-img convert -O qcow2 Linux.qcow2 Linux_s.qcow2

    This reduces the space used by Linux.qcow2, but deletes snapshots 4 and 5.

  • Using virt-sparsify (doesn't recover any space):

    cp Linux.qcow2 TEST.qcow2
    sudo virt-sparsify --in-place TEST.qcow2

    This retains the two undeleted snapshots, but doesn't recover the space that was originally occupied by snapshots 1, 2, 3.

How can I recover the space in a qcow2 file after deleting snapshots, while retaining the remaining snapshots?

I find it difficult to understand why there isn't an easy way to achieve this that is clearly documented.

Edit: Is there any way to copy snapshot 4 to a new file as a snapshot, and then add the delta for snapshot 5? Then I could just discard the old file with the wasted space.
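
For reference, qemu-img convert can materialize the state of a single internal snapshot into a fresh image (a sketch using the file names above; the snapshot name "4" is an assumption, and the new image starts with no internal snapshots, so snapshot 5 would need to be re-created separately):

# Confirm which internal snapshots remain
qemu-img snapshot -l Linux.qcow2

# Extract the disk state as of snapshot 4 into a new, compact image
qemu-img convert -O qcow2 -l snapshot.name=4 Linux.qcow2 Linux_snap4.qcow2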

Kubernetes Nginx Ingress could not load custom certificate from cert-manager

Posted: 29 May 2022 05:09 AM PDT

I am using cert-manager with this custom wildcard certificate:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-myapp-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: app@example.com # CHANGE-ME
    privateKeySecretRef:
      name: wildcard-myapp-com
    solvers:
      # ACME DNS-01 provider configurations
      - dns01:
          cloudDNS:
            serviceAccountSecretRef:
              name: clouddns-service-account
              key: dns-service-account.json
            project: myapp
        selector:
          dnsNames:
            - '*.myapp.com'
            - myapp.com
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-com-tls
  namespace: default
spec:
  secretName: myapp-com-tls
  issuerRef:
    name: letsencrypt-myapp-issuer
  commonName: '*.myapp.com'
  dnsNames:
    - '*.myapp.com'
    - myapp.com

I am deploying Nginx ingress with kustomize

spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --default-ssl-certificate=default/myapp-com-tls # NOTE THIS LINE

When I open the logs of the ingress controller, I see this error:

Error loading custom default certificate, falling back to generated
local SSL certificate default/myapp-com-tls was not found

What can I do to troubleshoot this?

UPDATE

If I run

kubectl get secret myapp-com-tls --namespace default

It returns nothing. However, if I run

kubectl get secret myapp.com-tls-qpmpr --namespace default

It returns

NAME                  TYPE     DATA   AGE
myapp.com-tls-qpmpr   Opaque   1      47m

However, if I change the flag in the YAML to the following, I get the same error:

--default-ssl-certificate=default/myapp.com-tls-qpmpr

$ kubectl describe certificates myapp-com-tls -n cert-manager
Error from server (NotFound): certificates.cert-manager.io "myapp-com-tls" not found
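
Two checks that might help (a sketch; note the Certificate above was created in the default namespace, not cert-manager, and a randomly suffixed secret like myapp.com-tls-qpmpr typically belongs to an issuance that is still in progress):

# Describe the Certificate in the namespace it was actually created in
kubectl describe certificate myapp-com-tls -n default

# Follow the issuance chain to see where it is stuck
kubectl get certificaterequests,orders,challenges -n default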

Cloud Functions return status 500 and the calls don't show up in the logs

Posted: 29 May 2022 08:01 AM PDT

I'm seeing some strange behaviour. I have a few HTTP functions in Firebase Cloud Functions. They work perfectly, but on some days they start returning status 500 for a while, go back to working normally for a few minutes, and then start returning status 500 again; this behaviour persists for the entire day.

The strangest part is that I don't get any error messages in Stackdriver; in fact, there are no log entries for these calls at all. It is as if the calls don't reach Google's services somehow, or they are rejected without leaving any record.

I'll post the implementation of one of the most used functions in my application:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp()

exports.changeOrderStatus_1 = functions.https.onRequest((request, response) =>
{
    //Check Headers
    const clientID = request.get('ClientID');

    if(clientID === null || clientID === undefined || clientID === "")
    {
        console.error(new Error('clientID not provided.'));
        return response.status(500).send('clientID not provided.');
    }

    const unitID = request.get('UnitID');

    if(unitID === null || unitID === undefined || unitID === "")
    {
        console.error(new Error('unitID not provided.'));
        return response.status(500).send('unitID not provided.');
    }

    //Check body
    const orderID = request.body.OrderID;

    if(orderID === null || orderID === undefined || orderID === "")
    {
        console.error(new Error('orderID not provided.'));
        return response.status(500).send('orderID not provided.');
    }

    const orderStatus = request.body.OrderStatus;

    if(orderStatus === null || orderStatus === undefined || orderStatus === "")
    {
        console.error(new Error('orderStatus not provided.'));
        return response.status(500).send('orderStatus not provided.');
    }

    const orderStatusInt = Number.parseInt(String(orderStatus));
    const notificationTokenString = String(request.body.NotificationToken);
    const customerID = request.body.CustomerID;

    const promises: any[] = [];

    const p1 = admin.database().ref('Clients/' + clientID + '/UnitData/' + unitID + '/FreshData/Orders/' + orderID + '/Status').set(orderStatusInt);
    promises.push(p1);

    if(notificationTokenString !== null && notificationTokenString.length !== 0 && notificationTokenString !== 'undefined' && !(customerID === null || customerID === undefined || customerID === ""))
    {
        const p2 = admin.database().ref('Customers/' + customerID + '/OrderHistory/' + orderID + '/Status').set(orderStatusInt);
        promises.push(p2);

        if(orderStatusInt > 0 && orderStatusInt < 4)
        {
            const p3 = admin.database().ref('Customers/' + customerID + '/ActiveOrders/' + orderID).set(orderStatusInt);
            promises.push(p3);
        }
        else
        {
            const p4 = admin.database().ref('Customers/' + customerID + '/ActiveOrders/' + orderID).set(null);
            promises.push(p4);
        }

        let title = String(request.body.NotificationTitle);
        let message = String(request.body.NotificationMessage);

        if(title === null || title.length === 0)
            title = "?????";

        if(message === null || message.length === 0)
            message = "?????";

        const payload =
        {
            notification:
            {
                title: title,
                body: message,
                icon: 'notification_icon',
                sound : 'default'
            }
        };

        const p5 = admin.messaging().sendToDevice(notificationTokenString, payload);
        promises.push(p5);
    }

    return Promise.all(promises).then(r => { return response.status(200).send('success') })
        .catch(error =>
            {
                console.error(new Error(error));
                return response.status(500).send(error)
            });
})

And this is how I invoke it; the client application is a Xamarin.Forms app using C#:

static HttpClient Client;

public static void Initialize()
{
    Client = new HttpClient();
    Client.BaseAddress = new Uri("My cloud functions adress");
    Client.DefaultRequestHeaders.Add("UnitID", UnitService.GetUnitID());
    Client.DefaultRequestHeaders.Add("ClientID", AuthenticationService.GetFirebaseAuth().User.LocalId);
}

public static async Task<bool> CallChangeOrderStatus(OrderHolder holder, int status)
{
    Debug.WriteLine("CallChangeOrderStatus: " + status);

    try
    {
        var content = new Dictionary<string, string>();

        content.Add("OrderID", holder.Order.ID);
        content.Add("OrderStatus", status.ToString());

        if (!string.IsNullOrEmpty(holder.Order.NotificationToken) && NotificationService.ShouldSend(status))
        {
            content.Add("CustomerID", holder.Order.SenderID);
            content.Add("NotificationToken", holder.Order.NotificationToken);
            content.Add("NotificationTitle", NotificationService.GetTitle(status));
            content.Add("NotificationMessage", NotificationService.GetMessage(status));
        }

        var result = await Client.PostAsync("changeOrderStatus_1", new FormUrlEncodedContent(content));

        return result.IsSuccessStatusCode;
    }
    catch (HttpRequestException exc)
    {
#if DEBUG
        ErrorHandlerService.ShowErrorMessage(exc);
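
To rule out the Xamarin client, one could try reproducing a call from the command line during an outage window (a sketch; region, project, and all values are placeholders):

# Hypothetical endpoint; substitute the real region and project
curl -i -X POST "https://us-central1-<project>.cloudfunctions.net/changeOrderStatus_1" \
  -H "ClientID: test-client" \
  -H "UnitID: test-unit" \
  --data-urlencode "OrderID=test-order" \
  --data-urlencode "OrderStatus=2"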
