Tuesday, June 28, 2022

Recent Questions - Server Fault

Best region for server to serve US

Posted: 28 Jun 2022 05:21 AM PDT

I want to create a MongoDB Atlas instance and a Google Cloud instance to serve the US region. Is there a rule of thumb for which region (east, central, south) or city is the best choice to serve the entire US with the lowest latency?

Mail always gets delivered to the second MX record

Posted: 28 Jun 2022 04:29 AM PDT

I have set up two MX records:

Name Type Pref Value
mail MX 5 server1.tld.
mailbackup MX 10 server2.tld.

When I try to send test mails from external servers like Gmail, they always get delivered to server2.tld. But server1.tld works correctly as soon as I disable the mailbackup MX record. Could it be a latency issue with server1.tld? How is the connection checked from the delivering server's point of view?
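A sending MTA looks up all MX records for the recipient domain and tries the lowest-preference host first, falling back to the next one only if the connection fails or times out. So if everything lands on the backup, it is worth testing whether the primary actually accepts connections from outside. A minimal sketch from an external host (the host names below stand in for the real ones):

dig +short MX yourdomain.tld
nc -vz -w 5 server1.tld 25

If the connection to port 25 on server1.tld hangs or is refused (firewall, no listener, wrong IP in DNS), senders like Gmail will quietly fall back to the mailbackup record.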

Apache: Where are 421 errors logged?

Posted: 28 Jun 2022 04:50 AM PDT

I have a very strange problem with 421 errors, but I cannot debug it because I cannot locate the logs. The errors are not in the ErrorLog file defined in httpd.conf, and not in the CustomLog file defined in the vhost file either.

How can logging for 421 errors be turned on, and is it possible to log the mismatching strings of the requested host name and the Server Name Indication (SNI)?
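For what it's worth, a 421 Misdirected Request is an ordinary response, so it would normally show up in an access log with status 421 rather than in the error log; it may, however, be logged by the vhost matched by the SNI name rather than the one named in the Host header. A rough sketch, with example host names, that checks all access logs and then provokes a mismatch on purpose so you can see which log records it:

# look for status 421 in every access log first
grep -r ' 421 ' /var/log/httpd/ /var/log/apache2/ 2>/dev/null

# send TLS SNI for one vhost but a Host header for another
openssl s_client -connect example.com:443 -servername site-a.example.com -quiet <<'EOF'
GET / HTTP/1.1
Host: site-b.example.com
Connection: close

EOF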

How to get a list of all active hooks in woocommerce?

Posted: 28 Jun 2022 04:27 AM PDT

I have a plugin that inserts data after a WooCommerce hook. Since most developers override those hooks and use them with a different priority than the one used in classic WooCommerce themes, I need to find out the priority level programmatically.

Is there a function that will retrieve all active hooks and their priority, in an array maybe?

Error '400 4.4.7 message delayed': emails are bouncing back due to delivery delay

Posted: 28 Jun 2022 04:18 AM PDT

I am facing an issue with my Exchange Server 2013. When any user sends email to another domain (emails within our domain are delivered properly), the email bounces back due to a delivery delay. When I check Queue Viewer I can see the emails stuck there; the error code is "error 400 4.4.7 message delayed". But when I restart the Exchange server, all queued emails are sent automatically. For about 30 minutes after the restart, emails are delivered normally, then they get stuck in the queue again.

Note: there is no issue receiving emails; only sending is affected.

When I looked up error 4.4.7, it can be related to a blacklisted IP. I checked our IP address using MXToolbox, and yes, it was blacklisted by Spamhaus. I contacted them and had it delisted two days ago, but even after that I still can't send email.
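A listing can be re-checked straight from the command line without MXToolbox; this is a sketch using a placeholder address (203.0.113.25), with the octets reversed as DNSBL queries require:

dig +short 25.113.0.203.zen.spamhaus.org
# no answer (NXDOMAIN) = not listed; an answer in 127.0.0.x = still listed

Keep in mind that 4.4.7 by itself only means the message expired in the queue; the underlying refusal (blacklist, blocked outbound port 25, greylisting, etc.) is usually visible in the send connector / SMTP protocol logs.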

Additional details:

Our email is hosted on a third-party server, and we download that mail into our Exchange server using the "igetmail" downloader. I contacted our email hosting company; they said the emails are not reaching their server and are stuck inside my Exchange server.

Dear experts, do you have any idea what the problem might be? I hope somebody can help me fix this issue.

Thank you

The mail delivery failure report is given below.

Delivery is delayed to these recipients or groups:

sayi sayi (jiyasmlp@gmail.com)

Subject:

This message hasn't been delivered yet. Delivery will continue to be attempted.

The server will keep trying to deliver this message for the next 1 days, 19 hours and 53 minutes. You'll be notified if the message can't be delivered by that time.

Diagnostic information for administrators:

Generating server: EXCHANGE.LevantDXB.Local

jiyasmlp@gmail.com Remote Server returned '400 4.4.7 Message delayed'

Original message headers:

Received: from EXCHANGE.LevantDXB.Local (192.168.1.16) by Exchange.LevantDXB.Local (192.168.1.16) with Microsoft SMTP Server (TLS) id 15.0.775.38; Mon, 27 Jun 2022 17:43:03 +0400
Received: from EXCHANGE.LevantDXB.Local ([::1]) by Exchange.LevantDXB.Local ([::1]) with mapi id 15.00.0775.031; Mon, 27 Jun 2022 17:43:03 +0400
From: jiyas jiyas@levant.com
To: sayi sayi jiyasmlp@gmail.com
Subject:
Thread-Index: AdiKK9EA43KZiLJETUicX+H1d0fEYA==
Date: Mon, 27 Jun 2022 13:43:02 +0000
Message-ID: e554cf751b914cf98279a6fc511a79ad@Exchange.LevantDXB.Local
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-originating-ip: [192.168.1.75]
Content-Type: multipart/related; boundary="013_e554cf751b914cf98279a6fc511a79adExchangeLevantDXBLocal"; type="multipart/alternative"
MIME-Version: 1.0

postfix incorrectly redirects external emails (different domain to my own) to a user on my server @ my host name but sendmail test works

Posted: 28 Jun 2022 04:11 AM PDT

Problem

  • postfix will successfully send emails whose domain ends with the same domain name as the server itself
  • emails with a different domain name ending, like gmail.com, don't get through

In both cases (successful and unsuccessful) I see warning: dict_nis_init: NIS domain name not set - NIS lookups disabled in the /var/log/maillog log.

I'm aware of the settings files in /etc/postfix, but I don't know what to look for in terms of a rule that says "don't send emails with a different domain name ending".

On same server the following works:

echo "Subject: test" | /usr/sbin/sendmail <an-email-address>@gmail.com

Why would this be the case, and what settings is sendmail using? Where could I find them?
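A few hedged checks that usually reveal why Postfix treats gmail.com differently from the local domain (parameter names are the stock ones; the log path may differ on your system):

postconf mydestination mydomain myorigin relayhost mynetworks   # is gmail.com accidentally treated as local, or is a relayhost required?
postqueue -p                                                    # deferred messages show the real delivery error
tail -n 50 /var/log/maillog                                     # the SMTP conversation / bounce reason for the gmail.com attempt

The dict_nis_init warning is harmless and unrelated; the interesting line is the one Postfix logs for the gmail.com delivery attempt itself.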

Research so far

`dpkg -l` to only show installed packages?

Posted: 28 Jun 2022 04:58 AM PDT

On Debian/Ubuntu, dpkg can show me packages, including the version and what's installed (e.g. dpkg -l postgresql\*). But it shows all packages that it knows about, whether installed or not.

Is it possible to make dpkg only show me the installed packages?

I've tried this with dpkg v1.21.8 on Debian testing/bookworm, and v1.19.0.5 on Ubuntu 18.04 bionic.
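A common workaround, shown here as a sketch rather than a built-in dpkg switch, is to filter on the status flags ("ii" = installed) or to query the status database directly:

dpkg -l 'postgresql*' | grep '^ii'

dpkg-query -W -f='${Status} ${Package} ${Version}\n' 'postgresql*' | grep 'ok installed'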

Microsoft Exchange (outlook outbound protection) is always failing with Exim DKIM bodyhash mismatch

Posted: 28 Jun 2022 05:18 AM PDT

I am noticing on our servers that since we set DKIM to reject failures in Exim (v4.95), a LOT of genuine emails are being rejected.

Looking at the Exim reject logs, we see that these emails are persistently rejected when they are handled by Microsoft Exchange / Outlook protection servers:

Exim Rejectlog:

2022-06-27 11:36:33 1o5m5d-0006bD-82 H=mail-eopbgr20129.outbound.protection.outlook.com (EUR02-VE1-obe.outbound.protection.outlook.com) [42.107.2.129]:54704 X=TLS1.2:ECDHE-ECDSA-AES256-GCM-SHA384:256 CV=no rejected DKIM :   DKIM: encountered the following problem validating 01473280699.onmicrosoft.com: bodyhash_mismatch  Envelope-from: <******@######.co.uk>  Envelope-to: <******@_!_!_!_!_!_.co.uk>  P Received: from mail-eopbgr20129.outbound.protection.outlook.com ([42.107.2.129]:54704 helo=EUR02-VE1-obe.outbound.protection.outlook.com)      by base.ourServer.co.uk with esmtps  (TLS1.2) tls TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384      (Exim 4.95)      (envelope-from <******@######.co.uk>)      id 1o5m5d-0006bD-82      for ******@_!_!_!_!_!_.co.uk;      Mon, 27 Jun 2022 11:36:33 +0100    ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=fail;  

See:

DKIM: encountered the following problem validating 01473280699.onmicrosoft.com: bodyhash_mismatch

This only happens with messages coming from addresses via outbound.protection.outlook.com and similar variations.

Another similar -but not identical- example:

Exim rejectlog:

2022-06-17 12:29:04 1o14hL-0005Jq-Jy H=mail12.rbs.com (remlvdmzma04.rbs.com) [159.136.80.93]:56531 X=TLS1.2:ECDHE-ECDSA-AES256-GCM-SHA384:256 CV=no rejected DKIM : DKIM: encountered the following problem validating rbsworkspace.onmicrosoft.com: bodyhash_mismatch  Envelope-from: <prvs=1573f1310=named@domain.com>  Envelope-to: <******@_!_!_!_!_!_.co.uk>  P Received: from mail12.rbs.com ([159.136.80.93]:56531 helo=remlvdmzma04.rbs.com)      by base.ourServer.co.uk with esmtps  (TLS1.2) tls TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384      (Exim 4.95)      (envelope-from <prvs=1573f1310=named@domain.com>)      id 1o14hL-0005Jq-Jy      for ******@_!_!_!_!_!_.co.uk;      Fri, 17 Jun 2022 12:29:04 +0100    DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;    d=domain.com; i=@domain.com; q=dns/txt; s=mail;    t=1655206023; x=1686742023;    h=from:to:subject:date:message-id:mime-version;    bh=3k6znbu3nEVhMGEdZiqBMSTBU7K2v4atTAzapxLvBCU=;    b=YzXTzO3up95x48MLGbmJPHw5k7nAeWwYzPKK5rJMX1LG7EjERZjRw/vQ     +H50m8i3SDz0DuGrK5EvvzzSjhfgMlxJLqulwcuZw4Bp9v9lH5tgWS7Pv     PFHQtkSIvK/TdVQj4x9jGkbBwfHVHu/qgGuyf0TJZ6CN9gHY/HFzA5t3d     c=;    IronPort-SDR: K+m4jq9fjUdFtsVbXA4w4YGnRoyY21aiv3bXAqTxQjy8TsSiXwdkT0lOHAASchcRl0oaoR3Fgk   u2Bcyi5/macn4gIof7lWGxgasR0Ct/xTY=    X-IronPort-AV: E=Sophos;i="5.91,300,1647302400";      d="png'150?scan'150,208,217,150";a="34252307"  

See:

DKIM: encountered the following problem validating rbsworkspace.onmicrosoft.com: bodyhash_mismatch

I have read various posts on here, Stack Overflow, Microsoft (who interestingly concluded the issue was at the receiver's end rather than their own email server setup), and elsewhere.


so, my question is:

Can I do anything with Exim at the receiving end to permit DKIM body hash mismatches caused by Microsoft Exchange?

Otherwise Exim can never refuse DKIM failures....

Qualifier: I assume the cause is MS Exchange because the outlook.com domain is present on ALL body hash mismatch emails in the reject logs, but I'm not familiar with Microsoft's architecture, so I've mentally grouped all of the above as "Exchange server" issues. That's my perception; I may be incorrect.

How to lint sieve scripts in a programmatic way

Posted: 28 Jun 2022 03:38 AM PDT

How can I lint sieve scripts programmatically?

I am looking for something to integrate into a sieve editor.

I have not found any reference in the sieve documentation.
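One option, assuming Dovecot's Pigeonhole tools are available, is to use its compiler as a syntax checker; it prints errors with line numbers and returns a non-zero exit status, which makes it easy to call from an editor:

sievec myscript.sieve /tmp/myscript.svbin && echo "syntax OK"

Pigeonhole also ships sieve-test for running a script against a sample message, which goes a step beyond pure linting.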

dnf: missing library, but does exist

Posted: 28 Jun 2022 05:20 AM PDT

Trying to install libmediainfo with dnf, I get an error about a missing library that does, in fact, exist.

How can I solve this?

Here is the dnf output:

$ sudo dnf install -v libmediainfo --allowerasing
DNF version: 4.7.0
cachedir: /var/cache/dnf
User-Agent: constructed: 'libdnf (Rocky Linux 8.6; generic; Linux.x86_64)'
repo: using cache for: appstream
repo: using cache for: baseos
repo: using cache for: extras
repo: using cache for: epel
repo: using cache for: epel-modular
--> Starting dependency resolution
--> Finished dependency resolution
Error:
 Problem: conflicting requests
  - nothing provides libtinyxml2.so.6()(64bit) needed by libmediainfo-22.03-1.el8.x86_64

The missing library is installed on the system:

$ ll /usr/lib64/libtiny*
    21 Aug 16  2019 libtinyxml.so -> libtinyxml.so.0.2.6.2
    21 Aug 16  2019 libtinyxml.so.0 -> libtinyxml.so.0.2.6.2
119272 Aug 16  2019 libtinyxml.so.0.2.6.2
    20 May 23  2014 libtinyxml2.so.2 -> libtinyxml2.so.2.1.0
 84816 May 23  2014 libtinyxml2.so.2.1.0
    20 Jun 28 11:22 libtinyxml2.so.6 -> libtinyxml2.so.2.1.0

Additional info

The libtinyxml2.so library was installed by hand (libtinyxml2 is not found by dnf) with:

wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/t/tinyxml2-2.1.0-2.20140406git6ee53e7.el7.x86_64.rpm
sudo rpm -Uvh tinyxml2-2.1.0-2.20140406git6ee53e7.el7.x86_64.rpm
sudo ldconfig

Important

ldconfig doesn't see libtinyxml2.so.6. This could be a clue for a solution.

$ ldconfig -p | grep tinyx
    libtinyxml2.so.2 (libc6,x86-64) => /lib64/libtinyxml2.so.2
    libtinyxml.so.0 (libc6,x86-64) => /lib64/libtinyxml.so.0
    libtinyxml.so (libc6,x86-64) => /lib64/libtinyxml.so
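dnf resolves dependencies from the RPM "Provides" metadata of packages in the enabled repositories, not from files or symlinks that happen to exist on disk, so the hand-made libtinyxml2.so.6 symlink does not satisfy the requirement. Two checks that make this visible (a sketch; output will vary):

# which package, if any, in the enabled repos claims to provide the soname dnf is asking for
dnf repoquery --whatprovides 'libtinyxml2.so.6()(64bit)'

# what the hand-installed EPEL 7 rpm actually declares
rpm -q --provides tinyxml2 | grep -i libtinyxml2

The EPEL 7 build declares libtinyxml2.so.2()(64bit), which is why the EL8 libmediainfo package (built against soname 6) still cannot be installed.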

phpMyAdmin network traffic is huge

Posted: 28 Jun 2022 02:39 AM PDT

I used PHP to update a table in my database. The code constructs a query using a CASE expression.

SELECT OrderID, Quantity,
CASE
    WHEN Quantity = 30 THEN "The quantity is 30"
    WHEN Quantity = 31 THEN "The quantity is 31"
    WHEN Quantity = 32 THEN "The quantity is 32"
    ELSE 0
END
FROM OrderDetails;

This maxed out the server CPU, since it is a big table (around 2000 entries). I have stopped the MySQL database and tried restarting it many times, but there are still many individual queries derived from the CASE query arriving at the database, such as:

SELECT OrderID, Quantity WHERE Quantity = 30  

What should I do?
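If the concern is the queries that are still running, they can be listed and killed from the shell without restarting the server; a sketch (the connection IDs shown are made up):

mysqladmin -u root -p processlist
mysqladmin -u root -p kill 1234,5678

If the statements keep reappearing after that, they are being re-submitted by the PHP side (or by phpMyAdmin itself), so stopping the web application is the other half of the fix.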

Console skipping while resetting Nextcloud password via OCC Command in a Docker container

Posted: 28 Jun 2022 02:51 AM PDT

Unfortunately I can't access my Nextcloud because both my normal user and my admin user are locked, as their passwords expired. I've tried resetting the password via OCC, but I can't enter a new password: the console just skips ahead. I'd really appreciate it if somebody could tell me what causes this or how to stop it.

(screenshot of the console output omitted)
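Inside a container, the usual cause of the prompt being "skipped" is that docker exec was not given an interactive TTY, so occ has no stdin to read the password from. A sketch, where the container name nextcloud and the web-server user www-data are assumptions that depend on the image:

docker exec -it -u www-data nextcloud php occ user:resetpassword admin

Alternatively, if your version supports it, the new password can be supplied non-interactively via the OC_PASS environment variable together with occ user:resetpassword --password-from-env.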

Where to find subscription-manager-migration-data 1.24 for CentOS 7?

Posted: 28 Jun 2022 02:32 AM PDT

This is driving me crazy! I'm in the process of migrating around 20-ish CentOS 7 machines from Spacewalk to Katello. To accomplish this I want to use the katello_bootstrap.py script to automate it. That script depends on subscription-manager and subscription-manager-migration, which in turn depends on subscription-manager-migration-data.

I was able to find https://buildlogs.centos.org/c7.1908.u.x86_64 to use as a repo, which contains subscription-manager-migration-1.24.13-4.el7.centos.x86_64, but not the -data package.

Please help me !!!!

Docker image is not running after copying custom nginx config

Posted: 28 Jun 2022 02:25 AM PDT

I have a Dockerfile below:

FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm i && npm run prod_build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY ["nginx.conf", "/etc/nginx/conf.d/default.conf"]
EXPOSE 80

ENTRYPOINT ["nginx", "-g", "daemon off;"]

Now the custom nginx config file nginx.conf is below

user                            root;
worker_processes                auto;

error_log                       /var/log/nginx/error.log warn;

events {
    worker_connections          1024;
}

http {
    listen       80;
    server_name  10.168.8.59;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

The Docker build works fine, but after running the docker run command, docker ps shows nothing. The commands are below:

docker build -t po .
docker run --rm -d -p 80:80 po

docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Can anyone please help me? Thanks.
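When a container exits immediately, running it in the foreground or asking nginx to validate the copied config usually shows why. A sketch:

docker run --rm -p 80:80 po          # run without -d so the startup error is printed
docker run --rm po nginx -t          # just test the configuration inside the image

Note that nginx only allows listen and server_name inside a server { } block, so as written the http { listen 80; ... } section will make the config test fail, which in turn makes the container exit at startup.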

Independent distributed FS for Linux with ACL

Posted: 28 Jun 2022 04:53 AM PDT

I have two Linux servers in two locations connected by VPN (over WAN, 100/100 Mb/s). Each of these servers runs a Samba DC (the same domain) and a Samba file server.

The Samba file servers are configured to store ACLs as POSIX ACLs (not in a file, as is the default). The Samba file servers host the user files (Desktop and Documents redirection) and profiles.

Users migrate between these two locations.

What I'm trying to achieve is real-time synchronization of files between these two servers, so that users can log into their domain accounts at both locations and always have the same files.

But this solution must be fault tolerant: if something happens to the connection between the two sites, users should still have access to their files (the latest version stored on the side where the user logged in), and when the connection is restored the two servers should synchronize files modified/created/deleted on both sides (so both file systems end up in the same state). Of course the obvious question is: what if the same file is modified on both sides while they are disconnected? In my scenario this is unlikely, but if it happens it's perfectly fine for me to keep the file version with the latest modification time.

I tried GlusterFS, but it has one big disadvantage in my scenario: a write to GlusterFS is only considered complete when it has been saved on both replicas, so every file write (I'm not sure about reads) is limited by the speed of my WAN VPN connection. This is not what I want; ideally a file is modified/saved directly on the local server and then asynchronously synced to the other side.

Currently I'm just using the Osync script. It basically meets almost all my conditions (at least it's the best solution I've been able to achieve): it preserves the POSIX ACLs and automatically resolves conflicts; in a word, it does not need my daily attention. But this solution has a big disadvantage: it's not a daemon, just a script running in an infinite loop, so on every run it scans the whole disk (on both sides) to detect changes and only then synchronizes the two servers. This is a very disk-intensive operation, and unfortunately each run takes about an hour (the synced folders contain about 1 TB of data).

So to sum up, I am looking for a solution that works like the Osync script but runs as a daemon that listens for changes on disk and, when a change occurs, immediately synchronizes it to the other site.

E.g. Syncthing looked very promising, but it does not support POSIX ACL sync (which the Samba file server needs).

Could you propose a solution?

DKIM key error with BIND9 DNS

Posted: 28 Jun 2022 02:03 AM PDT

I am having an issue while configuring DKIM with Postfix on Ubuntu 20.04; I get the following error:

opendkim-testkey -d example.com -s khloud -vvv
opendkim-testkey: using default configfile /etc/opendkim.conf
opendkim-testkey: key loaded from /etc/opendkim/keys/example.com/khloud.private
opendkim-testkey: checking key 'khloud._domainkey.example.com'
opendkim-testkey: 'khloud._domainkey.example.com' record not found

When I use dig, I can get the TXT record:

;; ANSWER SECTION:  khloud._domainkey.example.com. 3600 IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCovh+yxWZIxya57mm0hITvfGXH9RjW/MfeeKjrXkChqNrjlYAfqERBjUwL4VCmceUf/bbbkKskdNQUJq9mm54qAGi4MZS6v9TectyF7mvO1uw4GcjZjiCL8r8A1jX7znRjYl6Ew3+jecMLIoUci7m/LV7xPQcxMEMPO8sNK4dluQIDAQAB"  

I have the following DKIM record in my DNS zone file (I am using BIND9 on Ubuntu 20.04):

khloud._domainkey.example.com.    IN    TXT    "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCovh+yxWZIxya57mm0hITvfGXH9RjW/MfeeKjrXkChqNrjlYAfqERBjUwL4VCmceUf/bbbkKskdNQUJq9mm54qAGi4MZS6v9TectyF7mvO1uw4GcjZjiCL8r8A1jX7znRjYl6Ew3+jecMLIoUci7m/LV7xPQcxMEMPO8sNK4dluQIDAQAB"  

and here is my config: /etc/opendkim.conf

# Log to syslog  Syslog          yes  # Required to use local socket with MTAs that access the socket as a non-  # privileged user (e.g. Postfix)  UMask           007    # Sign for example.com with key in /etc/dkimkeys/dkim.key using  # selector '2007' (e.g. 2007._domainkey.example.com)  Domain          example.com  KeyFile     /etc/opendkim/keys/example.com/khloud.private  Selector        khloud    # Commonly-used options; the commented-out versions show the defaults.  Canonicalization    relaxed/simple  Mode            sv  #SubDomains     no    AutoRestart         yes  AutoRestartRate     10/1M  Background          yes  DNSTimeout          5  SignatureAlgorithm  rsa-sha256    # Socket smtp://localhost  #  # ##  Socket socketspec  # ##  # ##  Names the socket where this filter should listen for milter connections  # ##  from the MTA.  Required.  Should be in one of these forms:  # ##  # ##  inet:port@address           to listen on a specific interface  # ##  inet:port                   to listen on all interfaces  # ##  local:/path/to/socket       to listen on a UNIX domain socket  #  #Socket                  inet:8892@localhost  Socket          local:/var/spool/postfix/opendkim/opendkim.sock    ##  PidFile filename  ###      default (none)  ###  ###  Name of the file where the filter should write its pid before beginning  ###  normal operations.  #  PidFile               /run/opendkim/opendkim.pid      # Always oversign From (sign using actual From and a null From to prevent  # malicious signatures header fields (From and/or others) between the signer  # and the verifier.  From is oversigned by default in the Debian pacakge  # because it is often the identity key used by reputation systems and thus  # somewhat security sensitive.  OversignHeaders     From    ##  ResolverConfiguration filename  ##      default (none)  ##  ##  Specifies a configuration file to be passed to the Unbound library that  ##  performs DNS queries applying the DNSSEC protocol.  See the Unbound  ##  documentation at http://unbound.net for the expected content of this file.  ##  The results of using this and the TrustAnchorFile setting at the same  ##  time are undefined.  ##  In Debian, /etc/unbound/unbound.conf is shipped as part of the Suggested  ##  unbound package    # ResolverConfiguration     /etc/unbound/unbound.conf    ##  TrustAnchorFile filename  ##      default (none)  ##  ## Specifies a file from which trust anchor data should be read when doing  ## DNS queries and applying the DNSSEC protocol.  See the Unbound documentation  ## at http://unbound.net for the expected format of this file.    TrustAnchorFile       /usr/share/dns/root.key    ##  Userid userid  ###      default (none)  ###  ###  Change to user "userid" before starting normal operation?  May include  ###  a group ID as well, separated from the userid by a colon.  #  UserID                opendkim    # Map domains in From addresses to keys used to sign messages  #KeyTable           refile:/etc/opendkim/key.table  #SigningTable       refile:/etc/opendkim/signing.table    # Hosts to ignore when verifying signatures  ExternalIgnoreList  /etc/opendkim/trusted.hosts    # A set of internal hosts whose mail should be signed  InternalHosts       /etc/opendkim/trusted.hosts  

/etc/default/opendkim

# Command-line options specified here will override the contents of  # /etc/opendkim.conf. See opendkim(8) for a complete list of options.  #DAEMON_OPTS=""  # Change to /var/spool/postfix/run/opendkim to use a Unix socket with  # postfix in a chroot:  #RUNDIR=/var/spool/postfix/run/opendkim  RUNDIR=/run/opendkim  #  # Uncomment to specify an alternate socket  # Note that setting this will override any Socket value in opendkim.conf  # default:  SOCKET="local:/var/spool/postfix/opendkim/opendkim.sock"  # listen on all interfaces on port 54321:  #SOCKET=inet:54321  # listen on loopback on port 12345:  #SOCKET=inet:12345@localhost  # listen on 192.0.2.1 on port 12345:  #SOCKET=inet:12345@192.0.2.1  USER=opendkim  GROUP=opendkim  PIDFILE=$RUNDIR/$NAME.pid  EXTRAAFTER=  

I added the following section to /etc/postfix/main.cf:

# Milter configuration
milter_default_action = accept
milter_protocol = 6
smtpd_milters = local:opendkim/opendkim.sock
non_smtpd_milters = $smtpd_milters

My key table file:

khloud._domainkey.example.com     example.com:khloud:/etc/opendkim/keys/example.com/khloud.private  

My signing table file:

*@example.com    khloud._domainkey.example.com  

Notes:

  1. The private key file is owned by the opendkim user.
  2. The size of the key is 1024 bits.
  3. I am using a local DNS server (BIND9), so propagation is not an issue.
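Since dig sees the record, one thing worth ruling out is that opendkim-testkey is querying a different resolver than dig (it goes through the system resolver / libunbound, not necessarily the local BIND). A quick hedged check:

cat /etc/resolv.conf                                     # which resolver the host actually uses
dig +short TXT khloud._domainkey.example.com @127.0.0.1  # the local BIND answer
dig +short TXT khloud._domainkey.example.com             # the answer via the configured resolver

If the query against 127.0.0.1 answers but the default one does not, pointing resolv.conf at the local BIND (or fixing forwarding) should let opendkim-testkey find the record. The separate "key not secure" warning, by contrast, only means the zone is not DNSSEC-signed.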

Hyper-V Windows 10 virtual machine continuously crashing with memory management blue screen

Posted: 28 Jun 2022 05:05 AM PDT

I am trying to create a virtual machine from a Windows 10 laptop using the Sysinternals Disk2vhd utility.

First I created a vhdx file of the laptop's C: drive using the Disk2vhd utility. Then I copied the vhdx file to my Windows 10 desktop computer and created a virtual machine in Hyper-V Manager. The virtual machine starts and I can see the login screen, but after a few seconds I always get a memory management blue screen, 0xC000000F.

Is there anything I could do to fix this problem? Thank you in advance for your help!

Update: I discovered that you can also attach a vhd or vhdx file in Windows 10 through Computer Management (for more information refer to this article). I attached a fresh copy of the vhdx file I previously tried to boot in Hyper-V, and everything works as expected; I can access the data stored in the vhdx file.

Computer Management shows that the partitions (EFI, Windows NTFS and recovery) are error free. I still don't know the reason for the continuously appearing blue screen in Hyper-V.

Update 2: I also learned that you can convert a vhd or vhdx file to another format, like vmdk, using VBoxManage from VirtualBox. I converted the vhdx file into a vmdk file and used it to create a virtual machine in Workstation Player. The virtual machine works flawlessly in Workstation Player. I still don't know the reason for the constant blue screens in Hyper-V, but at least I have a working virtual machine now. I hope this can help people with a similar problem.

Article that explains how to attach a vhd or vhdx file in Windows 10 and how to convert them into a vmdk file using VBoxManage
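For reference, the conversion mentioned in Update 2 can be done with a single VBoxManage call; a sketch with example file names (older VirtualBox releases call the same subcommand clonehd):

VBoxManage clonemedium disk laptop.vhdx laptop.vmdk --format VMDK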

Windows Update crashes before opening

Posted: 28 Jun 2022 03:01 AM PDT

Windows Server 2016. When I try to open Windows Update (Settings - Update and security), the application crashes. In the Event Log, I found:

Faulting application name: SystemSettings.exe, version: 10.0.14393.82, time stamp: 0x57a55dc6
Faulting module name: Windows.UI.Xaml.dll, version: 10.0.14393.3750, time stamp: 0x5ed5df2d
Exception code: 0xc000027b
Fault offset: 0x00000000006d640b
Faulting process id: 0x2824
Faulting application start time: 0x01d84fef35ce1782
Faulting application path: C:\Windows\ImmersiveControlPanel\SystemSettings.exe
Faulting module path: C:\Windows\System32\Windows.UI.Xaml.dll
Report Id: 8fea4980-8d4d-4487-b8a3-15717fa5fd3d
Faulting package full name: windows.immersivecontrolpanel_6.2.0.0_neutral_neutral_cw5n1h2txyewy
Faulting package-relative application ID: microsoft.windows.immersivecontrolpanel

So I googled 0xc000027b and found:

In an elevated command prompt:
dism.exe /online /cleanup-image /scanhealth
dism.exe /online /cleanup-image /restorehealth

For dism.exe /online /cleanup-image /scanhealth, I got:

Deployment Image Servicing and Management tool
Version: 10.0.14393.3750

Image Version: 10.0.14393.3241

[==========================100.0%==========================] The component store is repairable.
The operation completed successfully.

...looks promising. For dism.exe /online /cleanup-image /restorehealth I got

Deployment Image Servicing and Management tool
Version: 10.0.14393.3750

Image Version: 10.0.14393.3241

[==========================100.0%==========================]
Error: 0x800f081f

The source files could not be found.
Use the "Source" option to specify the location of the files that are required to restore the feature. For more information on specifying a source location, see http://go.microsoft.com/fwlink/?LinkId=243077.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log

I checked the link at http://go.microsoft.com/fwlink/?LinkId=243077 and found: "Windows Update is the default repair source, but you can configure Group Policy to specify one or more network locations that have the files required to restore a Windows feature or repair a corrupted operating system."

And I'm stuck. Why doesn't it check Windows Update? How can I choose another source? Download an ISO? Which one?
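The /Source switch mentioned in the error message is the way to point DISM at matching installation media instead of Windows Update; a sketch, assuming a Server 2016 ISO of the same build is mounted as drive D::

rem index :1 has to match the installed edition; dism /Get-WimInfo /WimFile:D:\sources\install.wim lists the indexes
dism.exe /online /cleanup-image /restorehealth /Source:WIM:D:\sources\install.wim:1 /LimitAccess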

802.1x Wireless with certificates for AADJ/Intune devices without user affinity

Posted: 28 Jun 2022 02:11 AM PDT

I can set up certificate distribution and wireless profiles in Intune for devices with user affinity, and this works fine. The user account is synchronised with our on-site AD server, and NPS has an account to use for permissions. However, for devices that are purely in Azure without user affinity, there's no account for NPS to use for permissions. I could create these manually, but is there a way to do this using Microsoft applications, either by authenticating against Azure or by getting the accounts created in AD?

Thanks

Exchange 2016 - Users' mail keeps getting stuck in a shared mailbox's outbox until someone else uses the same shared mailbox

Posted: 28 Jun 2022 03:46 AM PDT

I have come across a very unusual problem that I have yet to find a solution for, as even Google failed me.

When User1 tries to send a mail as the shared mailbox SharedMailbox@ourdomain.com, the mail is placed in the outbox and does not show up in any logs. However, if User2 sends a mail as the shared mailbox, User1's mail is also sent out of the outbox with no problem. At this point User1's mail also shows up in the logs.

What I have tried:

  • restart Exchange
  • create a new mail for User1
  • try different Outlook clients on multiple PCs
  • move User1's mailbox to a different database
  • reassign User1's rights from scratch (all rights, including Send As)
  • copy the AD user and try it with the copy, which has the same problem as User1

Seeing as User1Copy has the same problem as User1, I snooped through the attributes in AD and changed every attribute that differed from User2, but it still does not work.

Two days ago it randomly sent the mail out of the outbox without someone else sending a new mail (I checked the logs).

Does anyone have an idea what else I can try (besides making a new AD user for User1), or whether I can access the outbox via the Exchange Management Shell?

ISC DHCP assign pool/subnet to certain MAC addresses

Posted: 28 Jun 2022 05:05 AM PDT

Hi, desperate here!

I'm running isc-dhcp-server 4.1 here (with Webmin, but that's another topic) and want to hand out some options (TFTP server and boot file) to network components that I can identify by MAC address and vendor string.

However, I've gotten to the point where the config matches the MAC address but still says it's an unknown client and won't proceed after DHCPDISCOVER. I do use deny unknown-clients, but for a different shared-network and subnet.

What am I missing here, and why do I only get this in the log?

dhcpd: found a board
dhcpd: DHCPDISCOVER from b8:27:eb:ab:cd:ef via eth0: unknown client

This is the current config:

shared-network COMPUTERS {
    subnet 10.0.106.0 netmask 255.255.254.0 {
        option subnet-mask 255.255.254.0;
        default-lease-time 3600;
        authoritative;
        ignore client-updates;
        deny unknown-clients;
        ddns-updates off;
        pool {
            range 10.0.106.170 10.0.106.200;
        }
        pool {
            range 10.0.107.170 10.0.107.200;
        }
    }
}

class "board" {
    match if substring (hardware, 1, 3) = b8:27:eb;
    log(info, "found a board");
}

shared-network hardware {
    # network for TFTP stuff
    subnet 192.168.120.0 netmask 255.255.255.0 {
        pool {
            allow unknown-clients;
            allow dynamic bootp clients;
            allow members of "board";
            next-server 192.168.120.254;
            filename "uboot.scr";
            range 192.168.120.10 192.168.120.50;
            log(info, "allocated to a board");
        }
    }
}

Since it's CentOS 6, I am using eth0 and eth0:1 config files; here is the ifconfig output as well as the ip add output:

> ifconfig:
eth0      Link encap:Ethernet  HWaddr 00:26:AB:12:34:56
          inet addr:10.0.106.3  Bcast:10.0.107.255  Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:356698 errors:0 dropped:0 overruns:0 frame:0
          TX packets:224426 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:59600895 (56.8 MiB)  TX bytes:32866187 (31.3 MiB)
          Interrupt:17

eth0:1    Link encap:Ethernet  HWaddr 00:26:AB:12:34:56
          inet addr:192.168.120.254  Bcast:192.168.120.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:17

> ip add:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:26:ab:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.106.3/23 brd 10.0.107.255 scope global eth0
    inet 192.168.120.254/24 brd 192.168.120.255 scope global eth0:1
       valid_lft forever preferred_lft forever

Any help would be greatly appreciated.

Cheers

Postfix + opendkim + spamassassin: messages are signed twice

Posted: 28 Jun 2022 03:00 AM PDT

I've got Postfix + spamassassin working already and I'm introducing opendkim.

I'm almost there except outgoing messages are DKIM signed twice, which is not only useless but even makes them DKIM invalid.

From what I gathered, the whole thing boils down to the fact that Postfix calls opendkim, then hands the message to spamassassin, which in turn gives it back to Postfix, which calls opendkim again.

This said, I don't know what to change to the configuration to prevent this.

I don't think the DKIM config itself is wrong; I followed the usual tutorials.

/etc/opendkim.conf

KeyTable                refile:/etc/postfix/dkim/keytable
SigningTable            refile:/etc/postfix/dkim/signingtable
ExternalIgnoreList      refile:/etc/postfix/dkim/TrustedHosts
InternalHosts           refile:/etc/postfix/dkim/TrustedHosts

/etc/postfix/dkim/TrustedHosts

127.0.0.1
::1
localhost
xxx.xxx.xxx.xxx (server IP)
xxxx:xxxx:... (server IPv6)
domain.tld
*.domain.tld

Postfix configuration is the following.

I added these lines to main.cf:

[...]
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891, inet:localhost:8892
non_smtpd_milters = $smtpd_milters

I left master.cf untouched:

# ==========================================================================  # service type  private unpriv  chroot  wakeup  maxproc command + args  #               (yes)   (yes)   (no)    (never) (100)  # ==========================================================================  smtp      inet  n       -       y       -       -       smtpd      -o content_filter=spamassassin  #smtp      inet  n       -       y       -       1       postscreen  #smtpd     pass  -       -       y       -       -       smtpd  #dnsblog   unix  -       -       y       -       0       dnsblog  #tlsproxy  unix  -       -       y       -       0       tlsproxy  #submission inet n       -       y       -       -       smtpd  #  -o syslog_name=postfix/submission  #  -o smtpd_tls_security_level=encrypt  #  -o smtpd_sasl_auth_enable=yes  #  -o smtpd_reject_unlisted_recipient=no  #  -o smtpd_client_restrictions=$mua_client_restrictions  #  -o smtpd_helo_restrictions=$mua_helo_restrictions  #  -o smtpd_sender_restrictions=$mua_sender_restrictions  #  -o smtpd_recipient_restrictions=  #  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject  #  -o milter_macro_daemon_name=ORIGINATING  #smtps     inet  n       -       y       -       -       smtpd  #  -o syslog_name=postfix/smtps  #  -o smtpd_tls_wrappermode=yes  #  -o smtpd_sasl_auth_enable=yes  #  -o smtpd_reject_unlisted_recipient=no  #  -o smtpd_client_restrictions=$mua_client_restrictions  #  -o smtpd_helo_restrictions=$mua_helo_restrictions  #  -o smtpd_sender_restrictions=$mua_sender_restrictions  #  -o smtpd_recipient_restrictions=  #  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject  #  -o milter_macro_daemon_name=ORIGINATING  #628       inet  n       -       y       -       -       qmqpd  pickup    unix  n       -       y       60      1       pickup  cleanup   unix  n       -       y       -       0       cleanup  qmgr      unix  n       -       n       300     1       qmgr  #qmgr     unix  n       -       n       300     1       oqmgr  tlsmgr    unix  -       -       y       1000?   1       tlsmgr  rewrite   unix  -       -       y       -       -       trivial-rewrite  bounce    unix  -       -       y       -       0       bounce  defer     unix  -       -       y       -       0       bounce  trace     unix  -       -       y       -       0       bounce  verify    unix  -       -       y       -       1       verify  flush     unix  n       -       y       1000?   0       flush  proxymap  unix  -       -       n       -       -       proxymap  proxywrite unix -       -       n       -       1       proxymap  smtp      unix  -       -       y       -       -       smtp  relay     unix  -       -       y       -       -       smtp  #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5  showq     unix  n       -       y       -       -       showq  error     unix  -       -       y       -       -       error  retry     unix  -       -       y       -       -       error  discard   unix  -       -       y       -       -       discard  local     unix  -       n       n       -       -       local  virtual   unix  -       n       n       -       -       virtual  lmtp      unix  -       -       y       -       -       lmtp  anvil     unix  -       -       y       -       1       anvil  scache    unix  -       -       y       -       1       scache  #  # ====================================================================  # Interfaces to non-Postfix software. 
Be sure to examine the manual  # pages of the non-Postfix software to find out what options it wants.  #  # Many of the following services use the Postfix pipe(8) delivery  # agent.  See the pipe(8) man page for information about ${recipient}  # and other message envelope options.  # ====================================================================  #  # maildrop. See the Postfix MAILDROP_README file for details.  # Also specify in main.cf: maildrop_destination_recipient_limit=1  #  maildrop  unix  -       n       n       -       -       pipe    flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}  #  # ====================================================================  #  # Recent Cyrus versions can use the existing "lmtp" master.cf entry.  #  # Specify in cyrus.conf:  #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4  #  # Specify in main.cf one or more of the following:  #  mailbox_transport = lmtp:inet:localhost  #  virtual_transport = lmtp:inet:localhost  #  # ====================================================================  #  # Cyrus 2.1.5 (Amos Gouaux)  # Also specify in main.cf: cyrus_destination_recipient_limit=1  #  #cyrus     unix  -       n       n       -       -       pipe  #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}  #  # ====================================================================  # Old example of delivery via Cyrus.  #  #old-cyrus unix  -       n       n       -       -       pipe  #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}  #  # ====================================================================  #  # See the Postfix UUCP_README file for configuration details.  #  uucp      unix  -       n       n       -       -       pipe    flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)  #  # Other external delivery methods.  #  ifmail    unix  -       n       n       -       -       pipe    flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)  bsmtp     unix  -       n       n       -       -       pipe    flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient  scalemail-backend unix  -       n       n       -       2       pipe    flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}  dovecot   unix  -       n       n       -       -       pipe    flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}  spamassassin unix -     n       n       -       -       pipe    user=debian-spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}  

The advice I found about the "DKIM signs twice" issue explains why it happens, but the resolution applies to configurations that differ from mine (e.g. in the Ubuntu documentation). They suggest adding no_milters to a -o receive_override_options= line, but I don't have this line and I don't use amavis. I can't figure out how to proceed, as master.cf is a bit cryptic to me.

Questions:

  • Is the root cause correctly identified (double signature due to spamassassin resending the message to the queue)?
  • How do I skip the milters when the messages come back to the queue? (A sketch follows below.)
  • Would it be better to DKIM sign after spamassassin rather than before? If so, what would the configuration be? (I suppose removing smtpd_milters from main.cf and adding -o smtpd_milters=... in place of receive_override_options=no_milters, but it seems uncommon, so I'd say this doesn't matter.)
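Regarding the second bullet: with a content_filter that pipes through spamc and re-injects via /usr/sbin/sendmail, the re-injected copy enters Postfix through pickup and is processed by non_smtpd_milters, which is where the second signature comes from. One possible workaround, sketched here under the assumption that opendkim is the milter on port 8891 (check which port is which on your system), is to keep the DKIM milter only on the smtpd side:

postconf -e 'smtpd_milters = inet:localhost:8891, inet:localhost:8892'
postconf -e 'non_smtpd_milters = inet:localhost:8892'
postfix reload

The trade-off is that mail submitted purely locally through the sendmail binary would then no longer be DKIM signed, so whether this is acceptable depends on how outgoing mail is actually submitted.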

SSH cannot load host key

Posted: 28 Jun 2022 05:05 AM PDT

I have a Debian 8 server, but since a few days ago my server provider has been showing SSH as disabled, even though the server still pings and HTTP/HTTPS are enabled. I have to add that I'm a newbie in this server world; I know the basics but not much else.

The last thing I remember touching was the sshd_config file, but I put everything back as it was at first (double-checked against another server of mine that I didn't touch).

So, when I put my server in rescue mode and I do

grep 'sshd' /var/log/auth.log  

I get the following:

Aug 17 12:23:44 vpsxxxxxx sshd[7974]: pam_unix(sshd:session): session opened for user root by (uid=0)
Aug 17 12:23:44 vpsxxxxxx sshd[7974]: Received disconnect from yy.yy.yy.yy: 11: disconnected by user
Aug 17 12:23:44 vpsxxxxxx sshd[7974]: pam_unix(sshd:session): session closed for user root
Aug 17 12:26:28 vpsxxxxxx sshd[7979]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Aug 17 12:26:28 vpsxxxxxx sshd[7979]: Connection closed by xx.xx.xx.xx [preauth]
Aug 17 12:26:52 vpsxxxxxx sshd[7981]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Aug 17 12:26:58 vpsxxxxxx sshd[7981]: Accepted password for root from xx.xx.xx.xx port 65429 ssh2
Aug 17 12:26:58 vpsxxxxxx sshd[7981]: pam_unix(sshd:session): session opened for user root by (uid=0)

I've tried more than two suggested solutions, such as running ssh-keygen -A and purging openssh-server and reinstalling it, but nothing works: the same error about the same key failing to load keeps showing up, and my connection is still refused when I try to log in.

What else can I try?

Edit

Here is the output of the ssh -v root@vpsxxxxxx:

OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug1: Connecting to vps308451.ovh.net [137.74.194.226] port 22.
debug1: connect to address 137.74.194.226 port 22: Connection refused
ssh: connect to host vps308451.ovh.net port 22: Connection refused

Please don't tell me it's a syntax / human error...

Edit2

I totally forgot about the other two questions, sorry about that.

I do have the HostKey /etc/ssh/ssh_host_ed25519_key line in my sshd_config file, and ls -la of the /etc/ssh/ directory outputs the following:

drwxr-xr-x  2 root root   4096 Aug 17 13:44 .
drwxr-xr-x 88 root root   4096 Aug 17 13:28 ..
-rw-r--r--  1 root root 242091 Jul 22  2016 moduli
-rw-r--r--  1 root root   1704 Aug 17 13:27 ssh_config
-rw-r--r--  1 root root   2528 Jun 20 08:29 sshd_config
-rw-------  1 root root    672 Aug 17 13:27 ssh_host_dsa_key
-rw-r--r--  1 root root    604 Aug 17 13:27 ssh_host_dsa_key.pub
-rw-------  1 root root    227 Aug 17 13:27 ssh_host_ecdsa_key
-rw-r--r--  1 root root    176 Aug 17 13:27 ssh_host_ecdsa_key.pub
-rw-------  1 root root    411 Aug 17 13:44 ssh_host_ed25519_key
-rw-r--r--  1 root root     97 Aug 17 13:44 ssh_host_ed25519_key.pub
-rw-------  1 root root    980 Aug 17 13:44 ssh_host_key
-rw-r--r--  1 root root    645 Aug 17 13:44 ssh_host_key.pub
-rw-------  1 root root   1675 Aug 17 13:27 ssh_host_rsa_key
-rw-r--r--  1 root root    396 Aug 17 13:27 ssh_host_rsa_key.pub

I have to report something, however: when I went to the folder to do the ls -la, the ssh_host_ed25519_key file wasn't there. I recreated it with ssh-keygen -A, but I had already tried this just a few hours ago and it didn't work.

As for the permissions, I have no clue whether the output is good or bad...

Edit 3

root@rescue-pro:/etc/ssh# ps -auwx | grep ssh
root       868  0.0  0.2  55184  5460 ?        Ss   13:27   0:00 /usr/sbin/sshd -D
root      7982  0.0  0.2  82680  5860 ?        Ss   13:33   0:00 sshd: root@pts/0
root      8009  0.0  0.1  13208  2160 pts/0    S+   13:55   0:00 grep ssh

Edit 4

root@rescue-pro:/home# netstat -lntp | grep sshd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      868/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      868/sshd

Edit 5: sshd_config file

# Package generated configuration file  # See the sshd_config(5) manpage for details    # What ports, IPs and protocols we listen for  Port 22  # Use these options to restrict which interfaces/protocols sshd will bind to  #ListenAddress ::  #ListenAddress 0.0.0.0  Protocol 2  # HostKeys for protocol version 2  HostKey /etc/ssh/ssh_host_rsa_key  HostKey /etc/ssh/ssh_host_dsa_key  HostKey /etc/ssh/ssh_host_ecdsa_key  HostKey /etc/ssh/ssh_host_ed25519_key  #Privilege Separation is turned on for security  UsePrivilegeSeparation yes    # Lifetime and size of ephemeral version 1 server key  KeyRegenerationInterval 3600  ServerKeyBits 1024    # Logging  SyslogFacility AUTH  LogLevel INFO    # Authentication:  LoginGraceTime 120  PermitRootLogin yes  StrictModes yes    RSAAuthentication yes  PubkeyAuthentication yes  #AuthorizedKeysFile     %h/.ssh/authorized_keys    # Don't read the user's ~/.rhosts and ~/.shosts files  IgnoreRhosts yes  # For this to work you will also need host keys in /etc/ssh_known_hosts  RhostsRSAAuthentication no  # similar for protocol version 2  HostbasedAuthentication no  # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication  #IgnoreUserKnownHosts yes    # To enable empty passwords, change to yes (NOT RECOMMENDED)  PermitEmptyPasswords no    # Change to yes to enable challenge-response passwords (beware issues with  # some PAM modules and threads)  ChallengeResponseAuthentication no    # Change to no to disable tunnelled clear text passwords  #PasswordAuthentication yes    # Kerberos options  #KerberosAuthentication no  #KerberosGetAFSToken no  #KerberosOrLocalPasswd yes  #KerberosTicketCleanup yes      # GSSAPI options  #GSSAPIAuthentication no  #GSSAPICleanupCredentials yes    X11Forwarding yes  X11DisplayOffset 10  PrintMotd no  PrintLastLog yes  TCPKeepAlive yes  #UseLogin no    #MaxStartups 10:30:60  #Banner /etc/issue.net    # Allow client to pass locale environment variables  AcceptEnv LANG LC_*    Subsystem sftp /usr/lib/openssh/sftp-server    # Set this to 'yes' to enable PAM authentication, account processing,  # and session processing. If this is enabled, PAM authentication will  # be allowed through the ChallengeResponseAuthentication and  # PasswordAuthentication.  Depending on your PAM configuration,  # PAM authentication via ChallengeResponseAuthentication may bypass  # the setting of "PermitRootLogin without-password".  # If you just want the PAM account and session checks to run without  # PAM authentication, then enable this but set PasswordAuthentication  # and ChallengeResponseAuthentication to 'no'.  UsePAM yes  

You guys are my last hope before I format and reinstall everything.
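One more hedged point to verify: in rescue mode these commands run against the rescue environment, so regenerating the key only helps if it is done on the real root filesystem (mounted and chrooted). A sketch of regenerating just the missing key with the permissions sshd expects, then syntax-checking the config:

ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
chmod 600 /etc/ssh/ssh_host_ed25519_key
/usr/sbin/sshd -t    # prints nothing if sshd_config and the host keys are usable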

Remote Desktop 'internal error' when attempting to connect after 'NSA' patches installed

Posted: 28 Jun 2022 03:14 AM PDT

I just installed the latest Windows updates (the NSA vulnerability Patch Tuesday) and now I cannot connect via Remote Desktop.

  • The server is remotely hosted. I don't have physical access. Server 2012 R1.
  • Fortunately all web sites are running OK after the reboot.
  • I have not yet tried a second reboot because I'm a little scared to.
  • When I try to connect I immediately get this message:
  • "Remote Desktop Connection: An internal error has occurred"

(screenshot of the error dialog omitted)

  • Have tried from multiple clients. They all fail - including an iOS app which in addition gives me a 0x00000904 error.
  • If I run telnet servername 3389 then it does initiate a connection, so I know the port is open.
  • I can connect just fine to other servers from my Win 10 (unpatched) machine.
  • I cannot connect from my second laptop either, which is Win 10 Creators edition.
  • Cannot find anything useful in Event Viewer.
  • I've even tried wireshark which didn't show me anything useful.
  • The best I have to diagnose is the ability to upload an ASPX page and run it.

I understand that the recent 'NSA edition' patch roundup had some RDP fixes, but I can't find anyone else who suddenly had issues this week.

I want to have an idea of what the problem is before I contact the hosting company, which is why I'm posting here.


Update:

While I still don't have physical server access, I remembered I have a Windows 7 VM hosted on the server itself. I was able to get into it and open the server Certificates snap-in by connecting to the 10.0.0.1 local IP.

This shows that the RDP certificate is indeed expired, although I get no errors when connecting that suggest as much. I certainly have been connecting daily, and since it expired two months ago, my guess is that some kind of security update removed whatever other certificate was in the Remote Desktop store and it did not renew itself.

So now I'm trying to figure out a way to install a different certificate here.

Update 2

Finally found this in the event log under 'Administrative Events' (by connecting remotely via the VM):

"The Terminal Server has failed to create a new self signed certificate to be used for Terminal Server authentication on SSL connections. The relevant status code was Object already exists."

This seems helpful, albeit a slightly different error. I can't reboot tonight though, so I will have to check again tomorrow.

https://blogs.technet.microsoft.com/the_9z_by_chris_davis/2014/02/20/event-id-1057-the-terminal-server-has-failed-to-create-a-new-self-signed-certificate/

(screenshot of the event log entry omitted)

Deployment template validation failed while deploying machine in azure

Posted: 28 Jun 2022 02:07 AM PDT

I am facing the following issue while performing vagrant up with the Azure provider:

"response": {      "body": "{\"error\":{\"code\":\"InvalidTemplate\",\"message\":\"Deployment template validation failed: 'The value fo  r the template parameter 'adminPassword' at line '1' and column '306' is not provided. Please see https://aka.ms/arm-dep  loy/#parameter-file for usage details.'.\"}}",  

Template file:

{    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",    "contentVersion": "1.0.0.0",    "parameters": {      "vaults_VaultVagrant_name": {        "defaultValue": "VaultVagrant",        "type": "String"      },      "AdminPassword":{      "type":"securestring"      }      },    "variables": {},    "resources": [      {        "type": "Microsoft.KeyVault/vaults",        "name": "[parameters('vaults_VaultVagrant_name')]",        "apiVersion": "2015-06-01",        "location": "eastus",        "tags": {},        "properties": {          "sku": {            "family": "A",            "name": "standard"          },          "tenantId": "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy",          "accessPolicies": [            {              "tenantId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",              "objectId": "1efb1891-8ad4-4f69-9e0d-f5849e6f8c98",              "permissions": {                "keys": [                  "get",                  "create",                  "delete",                  "list",                  "update",                  "import",                  "backup",                  "restore"                ],                "secrets": [                  "all"                ]              }            }          ],          "enabledForDeployment": true        },        "resources": [],        "dependsOn": []      }      ]  }  

Parameter

{    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",      "contentVersion": "1.0.0.0",      "parameters": {          "AdminPassword": {              "reference": {                 "keyVault": {                    "id":"/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/azurevag/providers/Microsoft.KeyVault/vaults/VaultVagrant"                 },                  "secretName": "vagrant"               }             }     }  }  

I am deploying both files from my local machine as below:

azure group deployment create -f "c:\MyTemplates\example.json" -e "c:\MyTemplates\example.params.json" -g examplegroup -n exampledeployment

Issue: after the deployment is created successfully, I checked the deployment script in the Azure portal, where both files look like below.

TemplateFile

{    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",    "contentVersion": "1.0.0.0",    "parameters": {      "vaults_VaultVagrant_name": {        "defaultValue": "VaultVagrant",        "type": "String"      }    },    "variables": {},    "resources": [      {        "comments": "Generalized from resource: '/subscriptions/xxxxxxxxxxxxxxxxxxxx/resourceGroups/azurevag/providers/Microsoft.KeyVault/vaults/VaultVagrant'.",        "type": "Microsoft.KeyVault/vaults",        "name": "[parameters('vaults_VaultVagrant_name')]",        "apiVersion": "2015-06-01",        "location": "eastus",        "tags": {},        "properties": {          "sku": {            "family": "A",            "name": "standard"          },          "tenantId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",          "accessPolicies": [            {              "tenantId": "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy",              "objectId": "1efb1891-8ad4-4f69-9e0d-f5849e6f8c98",              "permissions": {                "keys": [                  "get",                  "create",                  "delete",                  "list",                  "update",                  "import",                  "backup",                  "restore"                ],                "secrets": [                  "all"                ]              }            }          ],          "enabledForDeployment": true        },        "resources": [],        "dependsOn": []      }    ]  }  

Note: the parameter adminPassword is missing after deployment.

Parameter file:

The parameter file is empty. How did the values (the admin password) go missing after deployment?

Proper way to override Mysql my.cnf on CentOS/RHEL?

Posted: 28 Jun 2022 04:00 AM PDT

Context: I'm porting open-source server software (and writing the associated documentation) from Debian/Ubuntu to CentOS/RHEL.

For the software to run correctly, I need to add a dozen specific parameters to the MySQL configuration (for example, increasing max_allowed_packet).

From a Debian point of view, I know I can override MySQL's my.cnf by adding a file to /etc/mysql.d, say /etc/mysql.d/my-software.cnf.

My question is: how do I do the same correctly on CentOS/RHEL?

Other infos:

  • I know where mysqld looks for its configuration files thanks to https://dev.mysql.com/doc/refman/5.7/en/option-files.html. But, for CentOS, I don't understand:
    • how NOT to edit /etc/my.cnf directly (that may not be package-update-proof)
    • where to add my specific MySQL parameters
  • Reading the CentOS MySQL init script (/etc/init.d/mysql), I've seen that /etc/sysconfig/mysqld is sourced, but I don't know how to add configuration parameters there.
  • I've searched for combinations of override / my.cnf / centos on Server Fault, Stack Overflow and also DBA.StackExchange, but found nothing relevant.
  • I make all the tests within a "centos:6" Docker container
  • the software is Asqatasun https://github.com/Asqatasun/Asqatasun
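A sketch of the include-directory approach on CentOS/RHEL, with the caveat that the stock MySQL 5.1 package on CentOS 6 does not ship an !includedir line in /etc/my.cnf (MariaDB on CentOS 7 and the MySQL community packages do), so it may need to be added once; the file name asqatasun.cnf is just an example:

mkdir -p /etc/my.cnf.d
grep -q '!includedir' /etc/my.cnf || echo '!includedir /etc/my.cnf.d' >> /etc/my.cnf

cat > /etc/my.cnf.d/asqatasun.cnf <<'EOF'
[mysqld]
max_allowed_packet = 64M
EOF

/etc/my.cnf is typically marked %config(noreplace) in the rpm, so package updates leave the edited file in place (writing a .rpmnew instead) and never touch the drop-in file, which is what makes this reasonably update-proof.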

How to secure AD administration with MFA

Posted: 28 Jun 2022 04:00 AM PDT

I'm evaluating an MFA solution such as Duo or Okta (does anyone have an opinion on that?). Adding MFA to web logins is straightforward, but I want to add another layer of security to our Active Directory administration. Do any solutions work with PowerShell connections (e.g. msol-connect for Azure AD, or local stuff)? With Duo, for example, I can put MFA on a secure jumpbox, and that would add MFA for actions performed on that system. I assume I would need to add restrictions so that AD admin actions can only be taken on our MFA-enabled jumpboxes (which we currently don't enforce).

Any advice for MFA plus standard server access in general? Should we layer MFA at the workstation login? I don't want to put an agent on every server or IT workstation in the domain if I can help it. I'm wondering what others do to tackle these issues. It seems MFA wouldn't stop most attackers, as they can just go around the MFA-secured RDP, so we would need to rely on other controls.

Thank you for any thoughts

Restricting LDAP users from authenticating without a valid shell

Posted: 28 Jun 2022 03:02 AM PDT

What I'm trying to accomplish: if I change a user's attributes in LDAP so they have no valid shell (for example /etc/noshell), the user should not be able to SSH in successfully. I have changed the attributes of one user:

# check62, people, wh.local
dn: uid=check62,ou=people,dc=wh,dc=local
uid: check62
cn: Johnny Appleseed
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword:: e1NTSEF9dklpL2pQcWtNWDBFSUs1eUVDMUMxL2FjWHdJNGRuUXY=
shadowLastChange: 15140
shadowMax: 99999
shadowWarning: 7
uidNumber: 6002
gidNumber: 6002
homeDirectory: /home/check62
loginShell: /bin/noshell

# check62, group, wh.local
dn: cn=check62,ou=group,dc=wh,dc=local
objectClass: posixGroup
objectClass: top
cn: check62
gidNumber: 6002
userPassword:: e0NSWVBUfXg=

Here is my /etc/pam.d/sshd file

#%PAM-1.0
auth       required     pam_sepermit.so
auth       include      password-auth
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    optional     pam_keyinit.so force revoke
session    include      password-auth

The user authenticates successfully with /bin/bash. A show stopper. Any help will be greatly appreciated.

Here's my password-auth file

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_ldap.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so broken_shadow
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
account     sufficient    pam_ldap.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_ldap.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_ldap.so
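A minimal sketch of one way to enforce this (untested here): pam_shells rejects any account whose shell is not listed in /etc/shells, and it looks the shell up through NSS, so it also applies to LDAP users. Adding it to the account stack used by sshd, and making sure /bin/noshell is not listed in /etc/shells, would block the login:

echo 'account    required     pam_shells.so' >> /etc/pam.d/sshd

Another common approach is to set loginShell to /sbin/nologin, which lets authentication succeed but immediately ends the session with "This account is currently not available."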

How do I find the date/time set on an EMC Symmetrix?

Posted: 28 Jun 2022 03:02 AM PDT

I'd like to find out what date and time (and possibly time zone) a Symmetrix (DMX-4 and VMAX) is set to. I've checked Powerlink as well as the man pages for symcli, but nothing pops out.

mod_proxy failing as forward proxy in simple configuration

Posted: 28 Jun 2022 02:07 AM PDT

(On Mac OS X 10.6, Apache 2.2.11)

Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

<IfModule mod_proxy>
ProxyRequests On
ProxyVia On
<Proxy *>
  Allow from all
</Proxy>

(Yes, I realize that's not ideal, but I'm behind a firewall trying to figure out why the thing doesn't work at all)

So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens:

  1. I browse to http://www.cnn.com
  2. I see via sniffer that this is sent to Apache on the Mac
  3. Apache responds with its default home page ("It works!" is all this page says)

So... Apache is not behaving as expected: it is not forwarding my browser's request out onto the Internet to CNN. Nothing in the log file indicates an error or problem, and Apache returns a 200 header to the browser.

Clearly there is some very basic configuration step I'm not understanding... but what?
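Two hedged checks that narrow this down quickly. First, <IfModule> only matches a module file name (mod_proxy.c) or identifier (proxy_module); the bare mod_proxy in the snippet above matches neither form, so Apache may be silently skipping the whole block, which would produce exactly this "default page, no error" behaviour. Second, testing the proxy with curl takes the browser out of the equation:

apachectl -M 2>/dev/null | grep -i proxy     # are mod_proxy and mod_proxy_http actually loaded?
curl -v -x http://127.0.0.1:80 http://www.cnn.com/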
