Tuesday, February 1, 2022

Recent Questions - Server Fault

Reposilite - configure mirror

Posted: 01 Feb 2022 03:23 AM PST

I'm configuring Reposilite to host company releases. I'd also like to set it up as a mirror for the public projects we integrate and use, to improve performance.

Right now I'm running version 2 (2.9.26 to be exact) because 3 appears to be too unstable.

What I tried (but does not work) is setting the proxied section as follows:

proxied [
    https://europe-maven.pkg.dev/jxbrowser/releases
    https://www.license4j.com/maven
    https://jitpack.io
    https://repo1.maven.org/maven2
]

A sample of my .pom:

<properties>
  <jxbrowser.version>7.19</jxbrowser.version>
</properties>
...
<repositories>
  <repository>
    <id>com.teamdev</id>
    <!-- <url>https://europe-maven.pkg.dev/jxbrowser/releases</url> -->
    <url>http://my-local-maven.local/jxbrowser/releases</url>
  </repository>
</repositories>
...
<dependencies>
  <dependency>
    <groupId>com.teamdev.jxbrowser</groupId>
    <artifactId>jxbrowser</artifactId>
    <version>${jxbrowser.version}</version>
  </dependency>
</dependencies>

After mvn package -U I get:

Failed to read artifact descriptor for com.teamdev.jxbrowser:jxbrowser-javafx:jar:7.19: Could not transfer artifact com.teamdev.jxbrowser:jxbrowser-javafx:pom:7.19 from/to com.teamdev (http://my-local-maven.local/jxbrowser/releases): Cannot access http://my-local-maven.local/jxbrowser/releases with type default using the available connector factories: BasicRepositoryConnectorFactory: Cannot access http://my-local-maven.local/jxbrowser/releases using the registered transporter factories: WagonTransporterFactory: Unsupported transport protocol
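For comparison, a minimal sketch of the settings.xml approach, which routes all dependency traffic through the local instance instead of listing per-project repositories. The /releases path is an assumption; it must match whichever Reposilite repository fronts the proxied ones:

<!-- ~/.m2/settings.xml (sketch) -->
<settings>
  <mirrors>
    <mirror>
      <id>local-reposilite</id>
      <name>Local Reposilite mirror</name>
      <!-- assumed path; adjust to the repository that serves proxied artifacts -->
      <url>http://my-local-maven.local/releases</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>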

Sending push notification to s3 origin website from lambda

Posted: 01 Feb 2022 03:21 AM PST

I want to send a WebSocket notification from a Lambda function to clients on a website hosted via an S3 origin with CloudFront.

Could someone help me with this, please? Thanks.
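One common building block here is an API Gateway WebSocket API in front of the clients; the static S3/CloudFront site only hosts the JavaScript that opens the socket. A minimal sketch of the Lambda side, assuming connection IDs are tracked elsewhere (e.g. in DynamoDB) and with a hypothetical endpoint URL:

import json
import boto3

# hypothetical endpoint: https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.eu-west-1.amazonaws.com/production",
)

def notify(connection_id, payload):
    # deliver one message to one connected WebSocket client
    apigw.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps(payload).encode("utf-8"),
    )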

Execute sequence of commands in fortinet

Posted: 01 Feb 2022 02:35 AM PST

I would like to execute three commands in a fortinet firewall, the commands are:

#To enter in the config mode:
config vdom
#To select the virtual domain:
edit "name"
#To see the info I want:
get router info routing-table static

But I need to do it remotely. To do that, I tried this:

ssh xx@xx "config vdom; edit "xxx"; get router info routing-table static"  

When I do that, it executes only the first command and gives an error on the second and the third.

I tried changing the command to something like this, and it executes commands 1 and 2, but not the third:

ssh xx@xx "config vdom edit "xxxx"; get router info routing-table static"  

And I tried the same for the third one but it does not work...

It looks like the commands are executed independently and not in sequence.

Is there a way to do that in a single command?
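One approach worth trying (a sketch, hedged: FortiOS generally accepts a command script on stdin, which keeps the whole sequence in one session):

# feed the full command sequence to the firewall over a single SSH session
ssh xx@xx <<'EOF'
config vdom
edit "xxx"
get router info routing-table static
end
EOF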

SMTP connection error with Roundcube

Posted: 01 Feb 2022 03:25 AM PST

Recently I was trying to install Roundcube on my server, and everything worked fine until the SMTP test. When I try to connect I always get this error: SMTP send: NOT OK(Connection failed: Failed to connect socket: fsockopen(): unable to connect to ssl://mx.mydomain.me:587 (Unknown error))

Syslog at time of connection:

Feb  1 11:14:35 mx postfix/smtpd[196497]: initializing the server-side TLS engine
Feb  1 11:14:35 mx postfix/smtpd[196497]: connect from Ubuntu-2004-focal-64-minimal-hwe[ip]
Feb  1 11:14:35 mx postfix/smtpd[196497]: lost connection after CONNECT from Ubuntu-2004-focal-64-minimal-hwe[ip]
Feb  1 11:14:35 mx postfix/smtpd[196497]: disconnect from Ubuntu-2004-focal-64-minimal-hwe[ip] commands=0/0

My postfix config:

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
smtpd_tls_cert_file = /etc/letsencrypt/live/mx.mydomain.me/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mx.mydomain.me/privkey.pem
smtpd_tls_security_level = may

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_tls_note_starttls_offer = yes

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mx.raveoultion.me
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname, mydomain.me, mx.mydomain.me, localhost.mydomain.me, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = ipv4
smtp_tls_note_starttls_offer = yes
smtpd_tls_loglevel = 4
smtpd_sasl_path = private/auth
smtpd_sasl_local_domain =
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
smtpd_tls_received_header = yes

Also, I have ufw rule for my port.

After some testing, I now also get this error:

mx postfix/smtpd[208262]: warning: SASL: Connect to private/auth failed: No such file or directory
mx postfix/smtpd[208262]: fatal: no SASL authentication mechanisms

I have libsasl2 installed.
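One detail stands out in the first error: port 587 is the STARTTLS submission port, while the ssl:// prefix asks for implicit TLS, which fails exactly like the fsockopen error above. A sketch of the Roundcube side (hedged; the key names vary by version, smtp_server/smtp_port before 1.6, smtp_host after):

// config/config.inc.php (sketch)
$config['smtp_server'] = 'tls://mx.mydomain.me';  // tls:// = STARTTLS, not ssl://
$config['smtp_port'] = 587;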

Port forwarding through VPN for NAT penetration

Posted: 01 Feb 2022 02:27 AM PST

Problem:

  • I want Service1, listening on port XXXX and hosted on my local Windows 10 machine, to be accessible to the public, but my local machine is behind ISP NAT.

  • I have a Remote Server with a static public IP.

Question:

What should be configured on those nodes:

Public Client-->Remote Server-->VPN tunnel-->Local Router-->Local Machine-->Local Service

Progress:

I have done something similar successfully with another Service, let's call it Service2 with port YYYY:

  • Port YYYY is open on Remote Server.
  • On Remote Server, Port YYYY is forwarded to Local Router's VPN client IP address.
  • On Local Router, traffic with DSCP tag XX is policy routed to VPN interface.
  • On Local Machine, Service2 is DSCP tagged XX.

However, Service2 only uses UDP, while Service1 uses both TCP and UDP, and for some reason, DSCP tag does not work on TCP on Local Machine.

For Service1:

  • Port XXXX is open on Remote Server.
  • On Remote Server, Port XXXX is forwarded to Local Router's VPN client IP address.
  • On Local Router, traffic with "local port" of XXXX is policy routed to VPN interface.

Have I done this correctly?
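For what it's worth, a hedged sketch of an alternative that avoids DSCP tagging entirely: DNAT on the Remote Server straight to the Local Router's VPN client address. The address 10.8.0.2 and the interface name tun0 are placeholders, and XXXX stands in for the real port number as in the question:

# on the Remote Server
sysctl -w net.ipv4.ip_forward=1
# XXXX is the placeholder port from the question
iptables -t nat -A PREROUTING -p tcp --dport XXXX -j DNAT --to-destination 10.8.0.2
iptables -t nat -A PREROUTING -p udp --dport XXXX -j DNAT --to-destination 10.8.0.2
# make replies return through the tunnel instead of the default route
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE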

OpenStack: Too many Database idle Connections

Posted: 01 Feb 2022 02:15 AM PST

I use PostgreSQL as the DB backend for all OpenStack services on 3 controllers. When all 3 controllers are started with no load, the number of connections goes up to 700, and all of these connections are idle.

Here are the Database parameters in Openstack service conf files:

max_overflow = 50
max_pool_size = 5
pool_timeout = 30
connection_recycle_time = 600

I have changed these parameters to smaller values, but it has not improved anything.

Any idea why the number of connections is so high?
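Note that with oslo.db, every API worker keeps its own SQLAlchemy pool, so the total is roughly workers × services × (max_pool_size + max_overflow), and idle connections are expected to linger up to connection_recycle_time. To see which services hold them, a sketch:

psql -c "SELECT usename, state, count(*)
         FROM pg_stat_activity
         GROUP BY usename, state
         ORDER BY count DESC;"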

Best way to extract attachments from incoming mails in MS Exchange

Posted: 01 Feb 2022 01:53 AM PST

I administer an MS Exchange mail server with 20 recipients. One key to better IT security, in my opinion, is to filter/extract all attachments which are not needed. We don't need to receive .exe etc.; just PDF and JPG is fine. The problem is that sometimes they are inside ZIPs.

I tried TrendMicro and F-Secure, both are unreliable to me.

But sometimes we do need these attachments, so the admin should be able to recover them.

What's the best way to do it?
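One hedged option with on-box tools, before reaching for third-party filters: an on-premises Exchange transport rule that quarantines executable content (Exchange inspects inside supported archive types such as ZIP), from which the admin can release messages later:

# sketch: run in the Exchange Management Shell
New-TransportRule -Name "Quarantine executable attachments" `
    -AttachmentHasExecutableContent $true `
    -Quarantine $true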

How do I log a particular user login event in my active directory

Posted: 01 Feb 2022 01:40 AM PST

I have to log logons of a particular "install" user in my Active Directory. This user has admin rights to install software, and I need to understand where and by whom it is being used.

My first attempt was to create a PowerShell script to send me a log email, triggered by a scheduled task raised on event ID 4624 (logon).

The script filters the username to discard all other logins and sends the email. To access Event Viewer data I used a "task user" with the required grants.

I had to disable this task because it was triggered by far too many events: practically every user and computer login on my network. Each run of the triggered task logged on as the "task user", which generated another 4624 event, which triggered the task again, and so on...

How can I monitor "install" logins in my network without falling into a storm of log events?
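One way out of the loop is to narrow the task trigger itself, so it only fires for the "install" account rather than for every 4624. A sketch of a custom event filter, usable as the trigger's XML query:

<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4624)]]
      and
      *[EventData[Data[@Name='TargetUserName']='install']]
    </Select>
  </Query>
</QueryList>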

creating a linux local update mirror (cache) and automating it

Posted: 01 Feb 2022 01:14 AM PST

Hi, I run a few different Linux distributions and architectures, and some upstream servers are so slow that updates run at KB/s, for example with Ubuntu on ARM. I'm learning automation; how would I go about

  • creating a local only linux mirror/cache
  • updating that mirror/cache regularly automatically
  • automating the router to wake up the file server to update that cache and shutting it down when finished
  • additionally if possible, update other VMs/devices after the file server updates.

I have access to my router to create static DNS entries, and I have a file server that consumes a lot of power when turned on but is very fast (gzip close to 1 GB/s, which is the RAID's peak performance). It takes more than 200 W just to keep it on, and no matter what I do I can't reduce the wattage to under 100 W, so I decided to just run it at max performance and only turn it on when I need it.

I bricked the OS on one of the ARM boards and can't open it up to access the SD card: one of the Ubuntu upgrades went so slowly that it timed out and messed up a firmware update, as an example. I figured that since I use a few distros very often, I might as well have the update process served from a local cache, as an exercise in learning Ansible.

I would like help learning how to do this, whether here or via links to resources on how to do what I'd like.
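For the Debian/Ubuntu machines, one commonly used starting point is apt-cacher-ng as a pull-through cache, plus wake-on-LAN for the file server. A sketch; the hostname and MAC address are placeholders:

# on the file server: install the cache (listens on port 3142 by default)
sudo apt install apt-cacher-ng

# on each client: send apt through the cache
echo 'Acquire::http::Proxy "http://aptcache.local:3142";' | \
    sudo tee /etc/apt/apt.conf.d/00aptproxy

# from the router or another always-on box: wake the server before a refresh
wakeonlan aa:bb:cc:dd:ee:ff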

How to scale Wildfly servers on Azure VMs

Posted: 01 Feb 2022 12:55 AM PST

We have 3 virtual machines on Azure, each with a WildFly app server, and around 20 web applications with 300 GB of SQL data. We are happy with the performance, but our concern is security (the databases are on the same disks as the Linux OS).

What are the best practices in such cases? Separate the data onto a dedicated virtual machine, or just onto a separate disk?

We were thinking about separating the data and using only two WildFly servers on 2 separate VMs, but aren't 30 web applications too much for one instance of WildFly? (One application is around 120 MB in size and contains 100 web services.)

DiskPressure in Kubernetes nodes

Posted: 01 Feb 2022 12:49 AM PST

I have a Kubernetes cluster deployed in Google Cloud with several deployments. The problem is that now I'm trying to apply new deployments and I'm having trouble, because almost all the deployed pods give a DiskPressure error due to low space on this path: /mnt/stateful_partition.

Is there any way to increase this partition? The problem I've seen is that the filesystem is read-only, so I cannot free up space by deleting some logs or something like that... I'm completely blocked.

Any idea?
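Two hedged starting points: the stateful partition mostly holds container images and logs (the read-only part is the OS image), and on GKE the durable fix is usually a bigger boot disk on the node pool. To confirm and reclaim space:

# confirm which nodes report DiskPressure
kubectl describe nodes | grep -B2 -A2 KubeletHasDiskPressure

# on an affected node (via SSH), remove unused images
crictl rmi --prune        # containerd nodes, newer crictl
docker system prune -af   # older docker-based nodes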

Prevent Apache from writing to a file

Posted: 01 Feb 2022 12:35 AM PST

I have a website based on a CMS, running on Apache 2.4/PHP7.4. The CMS has an admin interface and changes you make there are written to config files inside the web root (/var/www/html/...). As a crude security measure, I thought I'd prevent writing to these files by changing file permission and ownership.

Apache runs as the www-data user, and the normal permissions for the files are 644 www-data:www-data. If I chmod and chown the files to 444 otheruser:otheruser and click "save" inside the CMS, the file is still written and it is also changed back to 644 www-data:www-data.

The containing directory has 777 otheruser:otheruser (for some reason). otheruser is member of the sudo group, if that somehow matters.

Is my approach doomed? What gives Apache/PHP the power to control these files regardless of ownership and permissions? Does it have to do with the fact that one of the many apache2 processes runs as root?
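The 777 directory is the likely culprit: write permission on a directory lets any process unlink and recreate the files inside it, whatever the files' own mode and owner say. The CMS almost certainly saves by writing a new file and replacing the old one, which is why the permissions "reset". A sketch of the tighter layout (paths assumed):

sudo chown -R otheruser:otheruser /var/www/html/config
sudo chmod 555 /var/www/html/config       # directory: no write bit at all
sudo chmod 444 /var/www/html/config/*     # files: read-only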

PostgreSQL as a stateful k8s application - issue with mounted volumes (in regard to ownership)

Posted: 01 Feb 2022 03:26 AM PST

I am trying to adapt a stateful k8s PostgreSQL cluster based on this article to the local environment at my company.

EDIT
This is a VMware Tanzu cluster, which I have not set up myself, so I do not have any further details on the nature of the cluster itself. I have added a StorageClass which I am referencing.

> kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8", GitCommit:"5575935422cc1cf5169dfc8847cb587aa47bac5a", GitTreeState:"clean", BuildDate:"2021-06-16T13:00:45Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.8+vmware.1", GitCommit:"3e397df2f5dadadfa35958ec45c14b0e81abc25f", GitTreeState:"clean", BuildDate:"2021-06-21T16:59:40Z", GoVersion:"go1.15.13", Compiler:"gc", Platform:"linux/amd64"}

end EDIT

There is a custom PostgreSQL image which mounts 3 volumes as

  • /opt/db/data/postgres/data
  • /opt/db/backup/postgres/backups
  • /opt/db/backup/postgres/archives

When applying those files (in the order they are listed below) to the cluster, the postgres pod does not spin up and the logs report a problem with access rights.

> kcl logs pod/postgres-stateful-0
starting up postgres docker image:
postgres -D /opt/db/data/postgres/data
+ echo 'starting up postgres docker image:'
+ echo postgres -D /opt/db/data/postgres/data
+ '[' '!' -d /opt/db/data/postgres/data ']'
+ '[' '!' -O /opt/db/data/postgres/data ']'
+ mkdir -p /opt/db/data/postgres/data
+ chmod 700 /opt/db/data/postgres/data
chmod: changing permissions of '/opt/db/data/postgres/data': Operation not permitted

This stems from the docker-entrypoint.sh running on container creation.

The script checks whether the $PGDATA dir (/opt/db/data/postgres/data) exists and whether it is owned by the postgres user. The Dockerfile from the custom image actually creates this correctly, so the mkdir and chmod actions should be skipped and the container should start.

This works when you just run a single pod based on that image.

So I am guessing that mounting the volumes inside the container somehow screws up the ownership, and I am wondering how to get around this; in other words, how to define owner and access rights for the mount paths inside the container to be created.

Can anybody point me in the right direction on how to solve this? I could not even say whether it's the statefulset.yml or the storage.yaml that needs to be adjusted.


Image creation

ARG REGISTRY=docker-dev-local.dev.dvz-mv.net
ARG BASE_IMAGE_REPO=scm
ARG BASE_IMAGE_NAME=debian-bullseye
ARG BASE_IMAGE_TAG=latest

# Second stage - create runtime image
# -----------------------------------
#FROM debian:11 as base
#FROM docker-dev-local.dev.dvz-mv.net/scm/debian-bullseye:build-74 as base
FROM $REGISTRY/$BASE_IMAGE_REPO/$BASE_IMAGE_NAME:$BASE_IMAGE_TAG

# Maintainer
# ----------
LABEL org.opencontainers.image.authors="<somebody@somewhere.org>"

# Build Environment variables, change as needed
# -------------------------------------------------------------
ARG PG_MAJOR=14
ARG PG_VERSION=14.1
ARG DIST_VERSION=deb11
ARG DVZ_BUILD=dvz1
ENV DVZ_REPO_URL=http://dvzsn-rd1115.dbmon.rz-dvz.cn-mv.de/scb-repo

# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------
ENV PG_MAJOR=${PG_MAJOR}
ENV PG_VERSION=${PG_VERSION}
ENV PGUSER=postgres
ENV PGDATABASE=postgres
ENV PGPORT=5432
ENV DBBASE=/opt/db
ENV PGBASE=$DBBASE/postgres
ENV PGBIN=$PGBASE/bin
ENV PGHOME=$PGBASE/postgresql
ENV PGDATA=$DBBASE/data/postgres/data
ENV PGLOG=$PGDATA/log
ENV PGBACK=$DBBASE/backup/postgres/backups
ENV PGARCH=$DBBASE/backup/postgres/archives

ENV PATH=$PGHOME/bin:$PATH

ENV LANG=de_DE.UTF-8
ENV LC_MESSAGES=en_US.UTF-8
ENV TZ=Europe/Berlin

RUN env | sort

# Install additional packages and dependencies
# --------------------------------------------
RUN set -ex; \
    apt-get update && \
    apt-get upgrade && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        dirmngr \
        gnupg \
        iproute2 \
        less \
        libnss-wrapper \
        libpam0g \
        libreadline8 \
        libselinux1 \
        libsystemd0 \
        libxml2 \
        locales \
        openssl \
        procps \
        vim-tiny \
        wget \
        xz-utils \
        zlib1g \
    && \
    apt-get clean

# create locales for en_US and de_DE
RUN localedef -i en_US -f UTF-8 en_US.UTF-8 && \
    localedef -i de_DE -f UTF-8 de_DE.UTF-8 && \
    locale -a

# Set up user and directories
# ---------------------------
RUN mkdir -p $PGBASE $PGBIN $PGDATA $PGBACK $PGARCH && \
    useradd -d /home/postgres -m -s /bin/bash --no-log-init postgres && \
    chown -R postgres:postgres $PGBASE $PGDATA $PGBACK $PGARCH $DBBASE/data && \
    chmod a+xr $PGBASE

# set up user env
# ---------------
USER postgres
COPY --chown=postgres:postgres ["files/.alias", "files/.bashrc", "files/postgresql.conf.${PG_MAJOR}", "files/conf.d/00-ina-default.conf", "/hom
COPY ["files/docker-entrypoint.sh", "/"]
ADD ["files/pg-docker-env.tar.gz", "$PGBASE/"]

# install postgres
# --------------------
# copy postgres package from builder stage
#RUN mkdir -p $PGBASE/postgresql-$PG_VERSION-$DIST_VERSION-$DVZ_BUILD
#COPY --from=build --chown=postgres:postgres ["$PGBASE/postgresql-$PG_VERSION-$DIST_VERSION-$DVZ_BUILD", "$PGBASE/postgresql-$PG_VERSION-$DIST_
# download build of postgres
WORKDIR $PGBASE
RUN curl -sSL $DVZ_REPO_URL/postgres/Linux/$DIST_VERSION/postgresql-$PG_VERSION-$DIST_VERSION-dvz1.tar.gz | tar xzf - -C $PGBASE
RUN ln -s $PGBASE/postgresql-$PG_VERSION-$DIST_VERSION-$DVZ_BUILD postgresql

# bindings
# --------
VOLUME ["$PGDATA", "$PGBACK", "$PGARCH"]
STOPSIGNAL SIGINT
EXPOSE 5432
HEALTHCHECK --interval=1m --start-period=5m \
   CMD pg_ctl status >/dev/null || exit 1

# Define default command to start Database.
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres", "-D", "/opt/db/data/postgres/data"]
#!/bin/bash
set -xeEuo pipefail

echo "starting up postgres docker image:"
echo "$@"

# check PGDATA directory and create if necessary
if [ \! -d $PGDATA ] || [ \! -O $PGDATA ]
then
    mkdir -p $PGDATA
    chmod 700 $PGDATA
fi

# check database cluster in PGDATA directory and create new db cluster if necessary
if [ \! -s $PGDATA/PG_VERSION ] || ! pg_controldata
then
    POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-"Start1234"}
    initdb -D $PGDATA --locale=de_DE.UTF-8 --lc-messages=en_US.UTF-8 --auth-local=trust --auth-host=md5 --pwfile=<(echo "$POSTGRES_PASSWORD")
    mv $PGDATA/postgresql.conf $PGDATA/postgresql.conf.orig
    cp ~/postgresql.conf.${PG_MAJOR} $PGDATA/postgresql.conf
    mkdir -p $PGDATA/conf.d
    cp ~/00-ina-default.conf $PGDATA/conf.d/
    {
        echo "# allow connections via docker gateway or bridge"
        echo "host    all             all             172.16.0.0/14           md5"
    } >> "$PGDATA/pg_hba.conf"
fi

# show PGDATA version and controldata
echo "PGDATA/PGVERSION=`cat $PGDATA/PG_VERSION`"

# start postgres rdbms now
exec "$@"

kubernetes declarations

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pgdata33
  labels:
    app: postgres
    type: local
spec:
  storageClassName: ina01
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pgbackup33
  labels:
    app: postgres
    type: local
spec:
  storageClassName: ina01
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pgarch33
  labels:
    app: postgres
    type: local
spec:
  storageClassName: ina01
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/data"
# #####################################################################################
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata33-pvc
  labels:
    app: postgres
spec:
  storageClassName: ina01
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgbackup33-pvc
  labels:
    app: postgres
spec:
  storageClassName: ina01
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgarch33-pvc
  labels:
    app: postgres
spec:
  storageClassName: ina01
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configuration
  labels:
    app: postgres
data:
  POSTGRES_DB: awesomedb
  POSTGRES_USER: amazinguser
  POSTGRES_PASSWORD: perfectpassword
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-stateful
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: docker-dev-local.dev.dvz-mv.net/ina/postgresql:14.1-scm-debian-bullseye-build-74-4
        envFrom:
        - configMapRef:
            name: postgres-configuration
        ports:
        - containerPort: 5432
          name: postgresdb
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres/data   # /var/lib/postgresql/data
        - name: pv-backup
          mountPath: /opt/db/backup/postgres
        - name: pv-arch
          mountPath: /opt/db/backup/postgres/arch
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      volumes:
      - name: pv-data
        persistentVolumeClaim:
          claimName: pgdata33-pvc
      - name: pv-backup
        persistentVolumeClaim:
          claimName: pgbackup33-pvc
      - name: pv-arch
        persistentVolumeClaim:
          claimName: pgarch33-pvc
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
    name: postgres
  type: NodePort
  selector:
    app: postgres
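A common workaround for volume-ownership mismatches like this (hedged; it assumes the image's postgres user is uid/gid 1000, matching the securityContext above) is an initContainer that chowns the mount before the main container starts:

      # sketch: add to the StatefulSet pod spec, alongside "containers:"
      initContainers:
      - name: fix-pgdata-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /opt/db/data/postgres/data"]
        securityContext:
          runAsUser: 0          # must run as root to chown
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres/data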

AntiVirus for AWS Linux 2 EC2

Posted: 01 Feb 2022 02:04 AM PST

Is there antivirus software recommended to install on an AWS Linux 2 system?

In my infra, I'm using the following AWS services: EC2 (Seoul, Ohio, and Virginia), Load Balancers, Target Groups, SecurityHub, GuardDuty, Lambda, Jenkins, RDS, S3, SES, Route53, CloudWatch, CloudTrail, ElasticSearch, CloudFront, DynamoDB, SNS, VPC, ACL, WAF, IAM.

I think that sometimes the packages we install, the applications we set up, and their dependencies may have security and vulnerability issues. These can either be in a file or in the packages we install, and can be injected through websites. To cover such a scenario we need AV on the system.

Can anyone suggest which antivirus is best for my requirement? Also, is there any documentation available on how to install the AV on an AWS Linux 2 system?
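ClamAV is one commonly suggested option for Amazon Linux 2, available from EPEL. A sketch, not an endorsement:

sudo amazon-linux-extras install epel -y
sudo yum install -y clamav clamav-update
sudo freshclam                # fetch/refresh signature databases
clamscan -ri /home            # recursive scan, report infected files only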

Roundcube webmail on Debian 11 bullseye - Could not save password

Posted: 01 Feb 2022 02:11 AM PST

first question here, take me slow.

I have installed Dovecot, Postfix and Roundcube webmail in a VM to test a new mail server for the company I work for. All good until I try changing a password as a logged-in user from Roundcube Settings > Password.

What I have done:

  • Enabled password plugin in roundcube
  • Set the driver to "chpasswd" as my users are system users created with "useradd -m user password"
  • I have created a new file in sudoers.d and added "www-data ALL=NOPASSWD:/usr/sbin/chpasswd", as I understand apache2 runs as the www-data user and it needs sudo privileges. Still, after doing all these things, I get the same error: "Could not save password"

No logs that I can find show me more information about the problem. If there is a specific log I should look into, please tell me and I will. If I should provide any configuration, ask and I will provide it. Thank you!
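For reference, a sketch of the plugin wiring this setup needs; the command string must match the sudoers entry exactly, and sudo denials usually land in /var/log/auth.log:

// plugins/password/config.inc.php (sketch)
$config['password_driver'] = 'chpasswd';
$config['password_chpasswd_cmd'] = 'sudo /usr/sbin/chpasswd 2> /dev/null';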

AWS i3en.3xlarge really low iops

Posted: 01 Feb 2022 02:27 AM PST

I just launched a new EC2 instance of type i3en.3xlarge. The operating system is Ubuntu. I mounted the NVMe instance store, but every speed test I run is incredibly low, at around 7k IOPS. What am I doing wrong?

Here are the steps I did:

1) Check available SSDs with nvme list:

---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     vol012301587a8724842 Amazon Elastic Block Store               1           8.59  GB /   8.59  GB    512   B +  0 B   1.0
/dev/nvme1n1     AWS16AAAC6C7BFAC4972 Amazon EC2 NVMe Instance Storage         1           7.50  TB /   7.50  TB    512   B +  0 B   0

2) create a new xfs file system for nvme1n1:

sudo mkfs -t xfs /dev/nvme1n1  

3) mount it to /home

sudo mount /dev/nvme1n1 /home  

4) check df -h:

ubuntu@ip-172-31-35-146:/home$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.7G  2.8G  4.9G  37% /
devtmpfs         47G     0   47G   0% /dev
tmpfs            47G     0   47G   0% /dev/shm
tmpfs           9.4G  852K  9.4G   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            47G     0   47G   0% /sys/fs/cgroup
/dev/loop0       25M   25M     0 100% /snap/amazon-ssm-agent/4046
/dev/loop3       43M   43M     0 100% /snap/snapd/14066
/dev/loop2       68M   68M     0 100% /snap/lxd/21835
/dev/loop1       56M   56M     0 100% /snap/core18/2284
/dev/loop4       62M   62M     0 100% /snap/core20/1242
/dev/loop6       56M   56M     0 100% /snap/core18/2253
/dev/loop5       44M   44M     0 100% /snap/snapd/14549
/dev/loop7       62M   62M     0 100% /snap/core20/1328
tmpfs           9.4G     0  9.4G   0% /run/user/1000
/dev/nvme1n1    6.9T   49G  6.8T   1% /home

5) run test with fio:

fio -direct=1 -iodepth=1 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=iotest -name=Rand_Read_Testing  

Fio Results:

fio-3.16
Starting 1 process
Rand_Read_Testing: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=28.5MiB/s][r=7297 IOPS][eta 00m:00s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=1701: Sat Jan 29 22:28:17 2022
  read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(1024MiB/36717msec)
    slat (nsec): min=2301, max=39139, avg=2448.98, stdev=311.68
    clat (usec): min=32, max=677, avg=137.06, stdev=26.98
     lat (usec): min=35, max=680, avg=139.59, stdev=26.99
    clat percentiles (usec):
     |  1.00th=[   35],  5.00th=[   99], 10.00th=[  100], 20.00th=[  124],
     | 30.00th=[  125], 40.00th=[  126], 50.00th=[  139], 60.00th=[  141],
     | 70.00th=[  165], 80.00th=[  167], 90.00th=[  169], 95.00th=[  169],
     | 99.00th=[  172], 99.50th=[  174], 99.90th=[  212], 99.95th=[  281],
     | 99.99th=[  453]
   bw (  KiB/s): min=28040, max=31152, per=99.82%, avg=28506.48, stdev=367.13, samples=73
   iops        : min= 7010, max= 7788, avg=7126.59, stdev=91.80, samples=73
  lat (usec)   : 50=1.29%, 100=9.46%, 250=89.19%, 500=0.06%, 750=0.01%
  cpu          : usr=1.43%, sys=2.94%, ctx=262144, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=27.9MiB/s (29.2MB/s), 27.9MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=1024MiB (1074MB), run=36717-36717msec

Disk stats (read/write):
  nvme1n1: ios=259894/5, merge=0/3, ticks=35404/0, in_queue=35404, util=99.77%

According to benchmarks like here the iops performance should be way better.

So am I missing something here?

Thanks in advance
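One likely factor: -iodepth=1 measures single-request latency rather than peak throughput, and the advertised instance-store IOPS assume many requests in flight. A sketch of a deeper-queue run for comparison:

fio --direct=1 --iodepth=64 --numjobs=4 --rw=randread --ioengine=libaio \
    --bs=4k --size=1G --runtime=120 --group_reporting \
    --filename=iotest --name=Rand_Read_QD64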

Docker network timeouts when using bridge

Posted: 01 Feb 2022 02:09 AM PST

I'm running on a dedicated server with Ubuntu version 20.04.3 LTS (kernel 5.4.0-96-generic) and Docker 20.10.7, build 20.10.7-0ubuntu5~20.04.2. The system is a fresh install.

I have a Dockerfile for one of my services, which pulls some libraries in with apt and go get. One of the intermediate containers always fails to connect to the internet, with either DNS or TCP timeout errors. Which of the containers fails is completely random.

Also note that the problem is not with one specific service; I tried building a completely different service which runs on NodeJS, and the npm install failed with the same errors.

Today I also had the problem that my Nginx container was not reachable: all connections to it resulted in timeout errors.

Connections between containers using docker networks also don't work correctly.

Running sudo systemctl restart docker temporarily fixes the problem, but it reappears one or two builds down the line. When I build with the host network instead of the default bridge network, the problem is gone, which is why I suspected a faulty bridge config.

I've tried reinstalling Docker, resetting the iptables and bridge configs, setting different DNS servers, to no avail. The docker log files show no errors.

What could be the cause of this issue?

Update:

I've disabled UFW, but had no success. This is a dump from my dmesg log during a build that timed out, maybe this helps identify the cause:

[758001.967161] docker0: port 1(vethd0c7887) entered blocking state
[758001.967165] docker0: port 1(vethd0c7887) entered disabled state
[758001.967281] device vethd0c7887 entered promiscuous mode
[758002.000567] IPv6: ADDRCONF(NETDEV_CHANGE): veth7e3840a: link becomes ready
[758002.000621] IPv6: ADDRCONF(NETDEV_CHANGE): vethd0c7887: link becomes ready
[758002.000644] docker0: port 1(vethd0c7887) entered blocking state
[758002.000646] docker0: port 1(vethd0c7887) entered forwarding state
[758002.268554] docker0: port 1(vethd0c7887) entered disabled state
[758002.269581] eth0: renamed from veth7e3840a
[758002.293056] docker0: port 1(vethd0c7887) entered blocking state
[758002.293063] docker0: port 1(vethd0c7887) entered forwarding state
[758041.497891] docker0: port 1(vethd0c7887) entered disabled state
[758041.497997] veth7e3840a: renamed from eth0
[758041.547558] docker0: port 1(vethd0c7887) entered disabled state
[758041.551998] device vethd0c7887 left promiscuous mode
[758041.552008] docker0: port 1(vethd0c7887) entered disabled state
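One cause that matches these symptoms on dedicated or virtualised hosts is an uplink MTU below 1500 while docker0 stays at 1500, so larger packets silently vanish. A hedged check/fix: compare the MTUs in ip link, then clamp Docker's bridge (1450 is an assumption, use the uplink's value) and restart with systemctl restart docker:

# /etc/docker/daemon.json
{
  "mtu": 1450
}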

run ngrok using subprocess. how to use subprocess with both ip address and port?

Posted: 01 Feb 2022 02:17 AM PST

def runNgrok():
    ngrokDir = "/home/ubuntu/ngrokFunctionalities"
    port = 8081
    ngrok_command = "ngrok"
    make_executable = str(Path(ngrokDir, ngrok_command))
    ngrok = subprocess.Popen([make_executable, 'http', 127.0.0.2, '-inspect=false','-bind-tls=true', port])
    atexit.register(ngrok.terminate)
    time.sleep(7)
    return True

  File "ngrokRunKeepAlive.py", line 25
    ngrok = subprocess.Popen([make_executable, 'http', 127.0.0.2, '-inspect=false','-bind-tls=true', port])
                                                       ^
SyntaxError: invalid syntax
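The traceback is pure Python syntax: 127.0.0.2 is not a valid literal, and every argv element passed to Popen must be a string. A sketch of the fixed call:

# quote the address and fold the port in as one "host:port" argument
ngrok = subprocess.Popen(
    [make_executable, 'http', f'127.0.0.2:{port}',
     '-inspect=false', '-bind-tls=true']
)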

How to implement Continuous Integration with Puppet and multiple services

Posted: 01 Feb 2022 02:26 AM PST

We're trying to implement a Continuous Integration pipeline in our environment. We have a lot of different services, each with its own Git repository. Deployment is done via Puppet, using an external node classifier that determines what classes to deploy to each host type. And the Puppet files are sitting in their own Git repo, as depicted here:

Puppet and Git

Only, it's not just 3 services, it's more like 100. So the Puppet project is this monstrous monolith of multiple manifests, and of course it's in its own independent Git repo.

Now, along comes lil' ol' me, tasked with setting up a pattern for CI, so that when someone requests to merge a branch from, say, Service A, into master, we should be able to kick off a CI build that will spin up a virtual environment, deploy Service A to some VMs, and ensure that the new branch passes all automated tests. Of course, the problem is that to deploy a new build of Service A, I not only have to build it, but I also have to update the Puppet manifest to refer to the new build version...and the Puppet files are sitting in a completely independent repo, not on my branch. So I don't have any easy way to tell the Puppet Master that for this branch, we need to use the CI build, not the master version.

I can't be the first person ever to want to set up CI for an environment like this, but I've searched the web for solutions and come up empty. Perhaps I'm using the wrong search terms.

Can anyone suggest a suitable design pattern that will enable me to implement CI for all my service repos?
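One established pattern for exactly this (hedged; names hypothetical) is r10k-style dynamic environments: each branch of the Puppet control repo becomes a Puppet environment, and the CI job pushes a short-lived branch whose Puppetfile pins Service A to the build under test:

# Puppetfile on a throwaway CI branch of the control repo
mod 'service_a',
  :git    => 'https://git.example.com/puppet-service_a.git',
  :branch => 'feature-x'   # the service branch being merged

The CI VMs are then classified into that environment, leaving master untouched.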

Dell PERC H750 compatibility with Debian

Posted: 01 Feb 2022 01:50 AM PST

I've been using Debian on Dell servers for many years. For a long time I've been using the PERC H730P RAID controller, which is well supported by utilities like MegaCLI.

Recently I bought an R440 server with the new H750 RAID controller. I was initially able to install Debian 11 on logical volumes created from the "BIOS" System Setup. But after a few minutes/hours of using the server to configure the software side, the disks suddenly disappeared.

At boot, Grub was still working, but the Debian boot sequence would stop, unable to find partitions.

The LifeCycleController wouldn't report any hardware issue. But the « Support Live Image » (a liveCD provided by Dell) would not see any storage controller.

The tech support told me that this new RAID controller is not compatible with Debian (nor CentOS 7, which is on the SLI live CD), and that I have to ask for a replacement with an older but compatible H730P.

I'm writing this because I couldn't find anything online regarding Debian compatibility with recent Dell RAID controllers.

Hope this helps.

Update 2022-01-31

I've managed to reinstall a fresh Debian 11.2 without issues. I have then installed a backported 5.15.5 kernel (over the default 5.10.0). Everything seems fine.

But when I install MegaCLI, the installation process freezes the whole server. After many Ctrl-Cs and a few minutes, I get a shell back. "megaclisas-status" hangs on "-- Controller information --". After a round of Ctrl-C, I get the shell back.

If I try to purge the "megaclisas-status" and "megacli" packages, everything is frozen again.

I've just opened an issue on their tracker : https://github.com/eLvErDe/hwraid/issues/130

Update 2022-02-01:

My issue has been rejected, stating that this is a kernel issue.

I've reinstalled the whole OS with the 5.15 kernel and did a bunch of stress tests and benchmarks. Everything seems to be OK.

Then I've installed the "megacli" tool and used it with a few commands ; no issue.

Then I've installed the "megaclisas-status" package, and the server freezes when installing the package. After a hard-reboot, I can use the system again, but the "megaclisas-status" package is not installed.

AWS/Strongswan-Ubuntu Site to Site Tunnel Cannot Ping Remote

Posted: 01 Feb 2022 01:43 AM PST

Ubuntu (Linode) running strongSwan 5.6.2, connecting to AWS (site-to-site).

  1. I can ping from AWS endpoint to Ubuntu VPN.
  2. I cannot ping from AWS endpoint to Ubuntu endpoint.
  3. I cannot ping from Ubuntu VPN to AWS anything.

Ubuntu (VPN) public: 1.2.3.4 | Ubuntu (VPN) private: 192.168.234.113/24

AWS (VPN) public: 4.5.6.7 | AWS (VPN) private: 169.254.177.44/30

AWS (endpoint) private: 10.11.1.197

Ubuntu (endpoint) private: 192.168.136.15

I can ping the tunnel adapter's 169.254.177.46 from Ubuntu (local), but not the remote 169.254.177.45, which I assume is the customer gateway (destination host unreachable).

root@ubuntu:~# ping 10.11.1.197
PING 10.11.1.197 (10.11.1.197) 56(84) bytes of data.
From 169.254.177.46 icmp_seq=1 Destination Host Unreachable
From 169.254.177.46 icmp_seq=2 Destination Host Unreachable

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f2:3c:93:db:4d:c0 brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/24 brd 194.195.211.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.234.113/17 brd 192.168.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2600:3c02::f03c:93ff:fedb:4dc0/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 60sec preferred_lft 20sec
    inet6 fe80::f03c:93ff:fedb:4dc0/64 scope link
       valid_lft forever preferred_lft forever
3: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
6: Tunnel1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1419 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 1.2.3.4 peer 4.5.6.7
    inet 169.254.177.46 peer 169.254.177.45/30 scope global Tunnel1
       valid_lft forever preferred_lft forever
    inet6 fe80::200:5efe:c2c3:d3cb/64 scope link
       valid_lft forever preferred_lft forever

routes

10.11.1.0       0.0.0.0         255.255.255.0   U     100    0        0 Tunnel1
169.254.177.44  0.0.0.0         255.255.255.252 U     0      0        0 Tunnel1
192.168.128.0   0.0.0.0         255.255.128.0   U     0      0        0 eth0
194.195.211.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

xfrm policy

src 192.168.128.0/17 dst 0.0.0.0/0
        dir out priority 391295
        mark 0x64/0xffffffff
        tmpl src 1.2.3.4 dst 4.5.6.7
                proto esp spi 0xcdecfff9 reqid 1 mode tunnel
src 0.0.0.0/0 dst 192.168.128.0/17
        dir fwd priority 391295
        mark 0x64/0xffffffff
        tmpl src 4.5.6.7 dst 1.2.3.4
                proto esp reqid 1 mode tunnel
src 0.0.0.0/0 dst 192.168.128.0/17
        dir in priority 391295
        mark 0x64/0xffffffff
        tmpl src 4.5.6.7 dst 1.2.3.4
                proto esp reqid 1 mode tunnel
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket in priority 0
src 0.0.0.0/0 dst 0.0.0.0/0
        socket out priority 0
src ::/0 dst ::/0
        socket in priority 0
src ::/0 dst ::/0
        socket out priority 0
src ::/0 dst ::/0
        socket in priority 0
src ::/0 dst ::/0
        socket out priority 0
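One detail worth testing, going by the xfrm output: the policies only match src 192.168.128.0/17, so pings originating on the Ubuntu box itself leave with the public source address and never enter the tunnel. Forcing the LAN source is a quick check:

# source the ICMP from the address covered by the IPsec policy
ping -I 192.168.234.113 10.11.1.197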

How can I get nginx not to override x-forwarded-for when proxying?

Posted: 01 Feb 2022 01:40 AM PST

I have an nginx server behind a load balancer, the nginx server passes requests on to a variety of services, but in this case a docker container running apache. The load balancer sets an X-Forwarded-For correctly, but by the time it gets to the docker container, X-Forwarded-For has been set to the LB IP.

I have this in nginx config:

/etc/nginx/conf.d/real_ip.conf
set_real_ip_from {{LB IP}};
real_ip_header X-Real-IP;
real_ip_recursive on;

and this is the virtualhost:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name *.domain domain;
    include /etc/nginx/snippets/domain_ssl.conf;
    add_header X-Nginx-Debug "hi";
    proxy_pass_request_headers on;

    location / {
        proxy_pass_request_headers on;
        proxy_pass  http://container-php;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Remote-Addr $remote_addr;
        proxy_set_header X-Real-IP $http_x_real_ip;
        proxy_set_header X-Header-Test "Hello World - $http_x_forwarded_for";
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

But what I get from the container is:

array(19) {
  ["Connection"]=>
  string(7) "upgrade"
  ["Host"]=>
  string(19) "domain"
  ["X-Forwarded-For"]=>
  string(12) "{{LB IP}}"
  ["X-Header-Test"]=>
  string(13) "Hello World -"
  ["X-Forwarded-Proto"]=>
  string(5) "https"
  ["cache-control"]=>
  string(9) "max-age=0"
  ["sec-ch-ua"]=>
  string(64) "" Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97""
  ["sec-ch-ua-mobile"]=>
  string(2) "?0"
  ["sec-ch-ua-platform"]=>
  string(9) ""Windows""
  ["upgrade-insecure-requests"]=>
  string(1) "1"
  ["user-agent"]=>
  string(114) "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
  ["accept"]=>
  string(135) "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
  ["sec-fetch-site"]=>
  string(4) "none"
  ["sec-fetch-mode"]=>
  string(8) "navigate"
  ["sec-fetch-user"]=>
  string(2) "?1"
  ["sec-fetch-dest"]=>
  string(8) "document"
  ["accept-encoding"]=>
  string(17) "gzip, deflate, br"
  ["accept-language"]=>
  string(26) "en-GB,en-US;q=0.9,en;q=0.8"
}

Notably, X-Real-IP and X-Forwarded-For don't seem to be set as expected, nor does remote_addr. Files served directly from nginx have X-Forwarded-For set properly, so the LB is sending down the right header.

Have I missed a step?
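One suspect is the real_ip config itself: real_ip_header X-Real-IP tells nginx to derive the client address from a header the LB never sends, so $remote_addr stays the LB IP and $proxy_add_x_forwarded_for appends that. A sketch of the adjusted snippet:

# /etc/nginx/conf.d/real_ip.conf (sketch)
set_real_ip_from {{LB IP}};
real_ip_header X-Forwarded-For;
real_ip_recursive on;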

504 Gateway Time-out on NGINX Reverse Proxy even though the container is up

Posted: 01 Feb 2022 02:01 AM PST

I have the following Docker setup:

  • jwilder/nginx-proxy for the reverse proxy

  • jrcs/letsencrypt-nginx-proxy-companion for SSL (Let's Encrypt)

  • custom WildFly container as the endpoint

My problem is that when visiting the website, a 504 error gets thrown. I give environment variables to the WildFly container containing multiple VIRTUAL_HOST, LETSENCRYPT_HOST and LETSENCRYPT_EMAIL values. I tried exposing the ports but that did not help. Port 8080 gets shown in docker ps -a. The weight, max_fails etc. are from a tutorial I found online, because it wasn't working for me and I thought they would fix it. Using curl IP:8080 gives a successful response.

My Nginx config in the container:

# wildfly.example.com
upstream wildfly.example.com {
        # Cannot connect to network of this container
        server 172.17.0.5:8080 weight=100 max_fails=5 fail_timeout=5;
}
server {
        server_name wildfly.example.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        # Do not HTTPS redirect Let'sEncrypt ACME challenge
        location /.well-known/acme-challenge/ {
                auth_basic off;
                allow all;
                root /usr/share/nginx/html;
                try_files $uri =404;
                break;
        }
        location / {
                return 301 https://$host$request_uri;
        }
}
server {
        server_name wildfly.example.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/wildfly.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/wildfly.example.com.key;
        ssl_dhparam /etc/nginx/certs/wildfly.example.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/wildfly.example.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://wildfly.example.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $server_addr:$server_port;
                proxy_set_header X-Real-IP $remote_addr;
        }
}

P.S the comment that it cannot connect to the network exists because it did not automatically detect the server and I had to manually edit the internal IP. My docker logs nginxcontainerid output:

2020/06/04 14:14:37 [error] 22247#22247: *6228 upstream timed out (110: Connection timed out) while connecting to upstream, client: IPHERE, server: wildfly.example.com, request: "GET / HTTP/2.0", upstream: "http://172.17.0.5:8080/", host: "wildfly.example.com"  
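Given the hard-coded 172.17.0.5 and the comment about not reaching the container's network, a hedged first step is putting the proxy and WildFly on one user-defined network so the generated upstream can reach the backend by name (network and container names below are assumptions):

docker network create proxynet                 # name is arbitrary
docker network connect proxynet nginx-proxy    # container names assumed
docker network connect proxynet wildfly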

Importing a certificate says import successful but I can't see it anywhere in MMC

Posted: 01 Feb 2022 03:06 AM PST

A client gave us their certificate, a .cer file. I right-click it, choose Install Certificate, choose Local Machine and "Automatically select the certificate store" to import it. After a while (it takes about a minute for some reason) it pops up saying the import was successful.

When I open MMC and the certificates snapin, and choose local machine, I can't find the certificate anywhere.

Did it actually import? If so, where is it? I would expect it to appear in the personal store.

This isn't the first time I've had this problem. Fair enough if it put it into the wrong place, but I can't see it in any of the folders.
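A quick way to find where it landed (a sketch; adjust the subject filter to the client's name):

Get-ChildItem Cert:\LocalMachine -Recurse |
    Where-Object { $_.Subject -like "*clientname*" } |
    Format-List PSParentPath, Subject, Thumbprint, NotAfter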

Zimbra (open source) how can I perform a backup?

Posted: 01 Feb 2022 01:04 AM PST

Good day everyone

So I am trying to perform a backup on our Zimbra server, and I found documentation (Zimbra Backup Procedures) and realised that the scripts aren't working correctly.

Scripts from documentation

runBackupAll.sh :

echo "*******************************************************"  echo "*     Zimbra - Backup all email accounts              *"  echo "*******************************************************"  echo""  #  echo Start time of the backup = $(date +%T)    before="$(date +%s)"  #  echo ""  ZHOME=/opt/zimbra  ZBACKUP=$ZHOME/backup/mailbox  echo "Generating backup files ..."  su - zimbra -c "/opt/backup/SCRIPT_ZIBRA_BACKUP_ALL_ACCOUNTS/zimbra_backup_allaccounts.sh"  echo "Sending files to backup all email accounts for Machine2 (10.0.0.X - CrossOver Cable on eth1 \o/ ) ..."  rsync -avH $ZBACKUP root@ipaddress:/opt/zimbra_backup_accounts  before2="$(date +%s)"  #  echo The process lasted = $(date +%T)  # Calculating time  after="$(date +%s)"  elapsed="$(expr $after - $before)"  hours=$(($elapsed / 3600))  elapsed=$(($elapsed - $hours * 3600))  minutes=$(($elapsed / 60))  seconds=$(($elapsed - $minutes * 60))  echo The complete backup lasted : "$hours hours $minutes minutes $seconds seconds"  

Second Script:

# Script 2: zimbraBackupAllAccounts.sh

ZHOME=/opt/zimbra
ZBACKUP=$ZHOME/backup/mailbox
ZCONFD=$ZHOME/conf
DATE=`date +"%a"`
ZDUMPDIR=$ZBACKUP/$DATE
ZMBOX=/opt/zimbra/bin/zmmailbox
if [ ! -d $ZDUMPDIR ]; then
mkdir -p $ZDUMPDIR
fi
echo " Running zmprov ... "
for mbox in `zmprov -l gaa`
do
echo " Generating files from backup $mbox ..."
$ZMBOX -z -m $mbox getRestURL "//?fmt=zip" > $ZDUMPDIR/$mbox.zip
done

This script fails in this section:

    echo " Running zmprov ... "         for mbox in `zmprov -l gaa`  do  echo " Generating files from backup $mbox ..."         $ZMBOX -z -m $mbox getRestURL "//?fmt=zip" > $ZDUMPDIR/$mbox.zip  

The following command returns ...

zmmailbox -z -m bob@mail.somehost.com -t 0 getRestURL "/inbox?fm
ERROR: zclient.IO_ERROR (Unable to get REST resource from https://FQDN/home/bob@mail.somehost.com/inbox?fmt=zip: FQDN) (cause: java.net.UnknownHostException FQDN)

I noticed I can download my own emails when I am logged in through the web interface: https://mail.somedomain.com/home/bob///?fmt=tgz .

I need, though, to be able to access them all, and obviously without logging into each and every account.

How can I back up everyone's emails? From what I understand, the script fails because it wants an FQDN, but I cannot set this parameter; at least nothing I've tried has yielded results.
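Since the failure is java.net.UnknownHostException for the literal FQDN, two hedged things to try: map the hostname in /etc/hosts on the mail store, or pin zmmailbox to the local server with its --url option, e.g.:

# point zmmailbox at the local mailstore instead of the unresolvable FQDN
$ZMBOX -z -m $mbox -u https://localhost getRestURL "//?fmt=zip" > $ZDUMPDIR/$mbox.zip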

Cannot ping wired devices from wireless device

Posted: 01 Feb 2022 03:06 AM PST

I have a small home network with the following configuration:

  • 192.168.1.254 -> Gateway/DHCP/DNS
  • 192.168.1.1 - 192.168.1.127 -> DHCP Range
  • 192.168.1.215 - 192.168.1.253 -> Various IPs in this range are used for static IP devices.

The problem is that I have one laptop that, when connected wirelessly, cannot ping (or detect at all) wired devices. I receive the 'Destination Host Unreachable' error. The device I am trying to ping has an IP of 192.168.1.244. To be clear, I have tested with other laptops and they can ping 192.168.1.244 while connected via wireless. My iPhone also sees the device with that IP when testing using a network scanner app. It is a problem SPECIFIC to this machine. It is also specific to the wireless interface; if I use an ethernet cable, I can ping the IP just fine.

Some more details on what I have tried:

  • Update the wireless card drivers (Dell wireless 1901 card on Windows 10)
  • Updating Windows
  • Give the laptop a static IP
  • Let the laptop get an IP from DHCP
  • Disable the ethernet interface
  • arp -a results in wireless devices and the gateway, but no wired devices.
  • Tracert also results in the 'Destination Host Unreachable' error.
  • The router is a U-Verse router, I checked everywhere on the admin page to see if the device was in some sort of quarantine.
  • Reinstall Windows (Not an upgrade where you keep all your files - I wiped the disk and started over)

I have checked other questions (like this one) but I know that my devices are on the same subnet AND I know that wired devices can communicate with wireless devices on my network - again, this is the only device I have this issue with. When I have a chance, I'm going to try arp -s and manually add the device and see if that works. After that, I don't know what to chalk this up to besides a faulty or dying network card.
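For the static-ARP test mentioned above, the Windows syntax is as follows (MAC placeholder; read the real one from a working machine's arp -a, and run from an elevated prompt):

arp -s 192.168.1.244 aa-bb-cc-dd-ee-ff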

What am I missing?

UPDATE: The wireless card supports 2.4GHz and 5GHz. I have found that if I connect to the 5GHz network, this issue is resolved. I have most devices connected to the 2.4GHz network, so I know that is not the issue.

Virtual box linux routing

Posted: 01 Feb 2022 03:09 AM PST

Sorry for the basic question, but I can't figure this one out. I want to set up a small network of Linux servers for testing purposes.

So I have a host server running virtual box with the following interface:

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.4  netmask 255.255.255.0  broadcast 192.168.0.255

Then a guest vm with the following networking set up:

eth0      Link encap:Ethernet  HWaddr 08:00:27:EA:15:4F
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0

eth1      Link encap:Ethernet  HWaddr 08:00:27:E3:E2:BC
          inet addr:172.16.0.1  Bcast:172.16.7.255  Mask:255.255.248.0

And a second vm guest set up as follows:

eth0      Link encap:Ethernet  HWaddr 08:00:27:15:CA:14
          inet addr:172.16.0.2  Bcast:172.16.7.255  Mask:255.255.248.0
          inet6 addr: fe80::a00:27ff:fe15:ca14/64 Scope:Link

I want to be able to route from VM 2 back to the host server. So I created a route telling VM 2 to send traffic for the 192.168.0.0 network via VM 1:

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     172.16.0.1      255.255.255.0   UG    0      0        0 eth0
172.16.0.0      0.0.0.0         255.255.248.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0

But I can not ping through to the 192.168.0.0 network from vm 2. Routing table on vm 1 is as follows:

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.16.0.0      0.0.0.0         255.255.248.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

the routing table on the host server (running virtual box) is :

% route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 wlan0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 wlan0

So I guess my problem is that the host server knows nothing of my VMs' 172.16.0.0/21 network and can't reply.

How can I fix this? Will I have to use iptables and NAT?
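Both routes work, as sketched below: either teach the host about the VM network, or NAT on VM 1 (interface and address names taken from the tables above):

# in both cases VM 1 must forward packets
sysctl -w net.ipv4.ip_forward=1

# option 1 - on the host: add a return route via VM 1's 192.168.0.2
sudo route add -net 172.16.0.0 netmask 255.255.248.0 gw 192.168.0.2

# option 2 - on VM 1: hide the VM net behind VM 1's 192.168.0.2 address
iptables -t nat -A POSTROUTING -s 172.16.0.0/21 -o eth0 -j MASQUERADE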

ASP.NET 4 IIS 7 web server timeout

Posted: 01 Feb 2022 01:04 AM PST

I have 3 applications on the same web server. Two of them are configured in separate ASP.NET 4 application pools, and one of them is in an ASP.NET 2 application pool.

I'm experiencing intermittent timeouts when accessing those apps during the day. To track down these timeouts, I have set up a ping monitoring service (motive.com). Here is a sample of the timeout occurrences log:

app     date                         downtime         main reason
APP2    19-September-2012, at 14:51  4 mins 50 secs   connect() timed out!
APP1    19-September-2012, at 14:51  4 mins 50 secs   connect() timed out!
APP2    19-September-2012, at 14:11  2 mins 50 secs   couldn't connect to host
APP1    19-September-2012, at 14:11  2 mins 50 secs   couldn't connect to host
APP2    19-September-2012, at 9:17   2 mins 41 secs   couldn't connect to host
APP1    19-September-2012, at 9:17   2 mins 41 secs   couldn't connect to host
...

As you can see, both ASP.NET 4 pools are timing out simultaneously. I'm also monitoring the ASP.NET 2.0 app pool's web site, and I haven't had a single timeout!

There's no pattern whatsoever related to the time of day it occurs (both day and night). Intervals between timeouts don't follow a pattern either; sometimes they happen after 40 minutes, others have some hours in between.

The timeout never lasts more than 5 minutes, but they also vary from 2 to 5 minutes randomly.

At first I thought it might have something to do with application pool recycling, but I've checked, and recycling is set to occur after 24 hours and is disabled for other events (memory peaks, etc.).

The site is infrequently accessed (it's in beta test), so there is no huge number of accesses, worker demand, memory consumption, etc.

I've also checked the IIS log, and there are 2-to-5-minute gaps during the hours of downtime reported by the monitoring service, but no error message. I also checked the Windows event log and haven't found anything unusual in system and application events.

I'm really desperate right now. If someone could help me out, I'd be really thankful.

Best Regards. Eduardo de Freitas
