Sunday, November 21, 2021

Recent Questions - Server Fault



Can Serial I/O from a server be shared on an ethernet network switch?

Posted: 21 Nov 2021 03:31 PM PST

I have a network switch, a Linux server, and my computer. The computer and server are both connected to the switch. If I connect the serial port of the server to the switch through a Serial to Ethernet adapter, will my computer be able to access that Serial Console, or will I have to connect the server directly to the computer?

NOTE: the switch is a smart managed Ethernet switch, not a Serial COM switch.

I am not trying to connect to the serial console of the switch itself. I want to reach the serial console of the server through the switch.

Unmark my domain as spammer

Posted: 21 Nov 2021 03:08 PM PST

Recently the email account of one of my users was hacked, and the attacker used it to send a lot of spam and phishing mail. Once I realized the problem I fixed it, but it was a bit late, so my entire domain is now marked as a spam source by many servers.

Is there a way to unlist my domain?

Thanks a lot.

postfix not listening on port 25, netstat shows nothing on port 25

Posted: 21 Nov 2021 03:04 PM PST

The output of netstat shows nothing on port 25. I understand Dovecot >= 2.3.0 uses the submissions protocol; I don't know whether that is relevant to the Postfix smtpd daemon.
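
A first sanity check (a sketch; the sample file below stands in for the real /etc/postfix/master.cf) is whether the smtp inet service line in master.cf is commented out, since a leading # there means Postfix never starts an smtpd listener on port 25:

```shell
# Write a sample master.cf fragment to a temp file for illustration:
cat > /tmp/master.cf.sample <<'EOF'
#smtp       inet  n  -  y  -  -  smtpd
submission  inet  n  -  y  -  -  smtpd
EOF

# A count of 1 here means the port-25 service line is commented out:
grep -c '^#smtp' /tmp/master.cf.sample
```

On a live system, the same grep against /etc/postfix/master.cf plus something like ss -ltnp | grep ':25' shows whether smtpd is configured and actually listening.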

How to add server certificate exception to Chrome/Edge?

Posted: 21 Nov 2021 03:26 PM PST

Is it possible to add server certificate exceptions for some websites (to skip warning page about certificates that are expired, self-signed or with missing or mismatched CN/SANs) in Google Chrome / MS Edge for all users (in any scriptable way, but preferably using policies/registry)?

In Mozilla Firefox I am using Autoconfig, which is good enough and does not require policies. Is there an alternative to Autoconfig in Chrome/Edge?

NPS RADIUS Configuration with PEAP-MSCHAPv2

Posted: 21 Nov 2021 12:20 PM PST

I'm trying to fix my Microsoft Server 2016 Network Policy Server configuration as a RADIUS server with PEAP-MSCHAPv2.

As is well known, some modern devices can no longer be told to "not validate" the server certificate, because that option is too weak and has been removed (for example on some Android 11 devices).

As far as I know, the solution should be to add the internal CA certificate to these (non-domain) devices so that they can authenticate the NPS server certificate (and avoid managing client certificates).

I've found that the NPS server certificate was issued by an internal CA, and the certificate of this internal CA is self-signed (issued by itself). I've tried to export the CA certificate (without the private key) and import it on the devices, but so far without success: I receive either error 22 (EAP type cannot be processed by server) or error 265 (the certificate chain was issued by an authority that is not trusted).

I'm not sure whether I only got error 265 after I changed the domain field on the client to the domain part of the FQDN in the CN of the NPS server certificate.

How can I implement this correctly (PEAP-MSCHAPv2 with server authentication on non-domain clients)?

Note: it currently works fine for "old" wireless clients: they authenticate correctly as AD users and gain network access, so I only want to adjust the settings for the newer devices, without changing the setup radically.
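
For distributing the issuing CA's certificate, one common route (a sketch; run on the CA itself, output file name illustrative) is certutil:

```bat
:: Export the CA's own certificate (public part only) to a file that can
:: then be imported on the non-domain devices as a trusted root:
certutil -ca.cert ca_root.cer
```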

Postfix:mail-to-script messages contain extra id and from lines

Posted: 21 Nov 2021 12:11 PM PST

I have Postfix (3.6.3) forwarding mail for a user to a script:

maild: "| /usr/local/sbin/mailsave"  

The messages have extra From and id lines which break Python's email.Parser

Can you prevent Postfix from adding these lines?

From weberjn@host.my.tld  Sun Nov 21 19:11:19 2021
Return-Path: <weberjn@host.my.tld>
X-Original-To: maild@my.virtual
Delivered-To: maild@host.my.tld
Received: by host.my.tld (Postfix, from userid 1001)
	id D3DFD783; Sun, 21 Nov 2021 19:11:19 +0100 (CET)
To: maild@my.virtual
Subject: s1
Message-Id: <20211121181119.D3DFD783@host.my.tld>
Date: Sun, 21 Nov 2021 19:11:19 +0100 (CET)
From: Jürgen Weber <weberjn@host.my.tld>

body
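
For context, the leading "From sender date" line is an mbox-style envelope line that Postfix's local(8) delivery prepends when piping to a command, and the "id" line is just a folded continuation of the Received header. A minimal sketch (sample message inlined) of stripping the envelope line before parsing:

```python
import email.parser

raw = """From weberjn@host.my.tld  Sun Nov 21 19:11:19 2021
Subject: s1
To: maild@my.virtual

body
"""

# Drop the mbox envelope line, if present, before handing the text
# to email.parser (which expects RFC 5322 headers only):
if raw.startswith("From "):
    raw = raw.split("\n", 1)[1]

msg = email.parser.Parser().parsestr(raw)
print(msg["Subject"])  # prints: s1
```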

Network vlan setup trunk

Posted: 21 Nov 2021 11:56 AM PST

I'm in need of some help. When it comes to networking I'm rather a noob; I'm a software engineer.

In my homelab I'd like to tighten things up. I have installed OPNsense and would like to split my VMs into multiple VLANs:

vlan0 for basic stuff, a dev VLAN, a prod VLAN, a gaming VLAN, and last but not least a logging VLAN.

I have already set up vSwitches and port groups in VMware ESXi, but I'm stuck on the logging VLAN.

My initial plan was to give every VM a second vNIC inside the logging VLAN to send all logging/monitoring to Grafana, Nagios, and the like. However, today I came to the conclusion that if every VM is inside the logging VLAN, all VMs can still interact with each other.

I don't have any knowledge about VLAN trunking, but what is a good option or best practice here?

What I would like: some VMs are in the specified VLANs, and every VM needs to be monitored, with logs sent to Grafana.

I could set up routes between each VLAN and the logging VLAN; is that an option?

Does a server need a GPU?

Posted: 21 Nov 2021 11:38 AM PST

Do I need a GPU on a text- and console-only server? No GPU as in no iGPU and no dGPU. I'm going to be using SSH, so I don't need a display output.

I'm using Linux, but the OS shouldn't affect the answer.

Vmware Esxi - Old 32bit software performance issue on multi core

Posted: 21 Nov 2021 11:22 AM PST

I've been going crazy for 2 days now and I'm asking for help.

I have a program developed in Delphi (early 2000s or so) that accesses a Firebird v3 database, currently installed on the same machine (Windows Server 2016 x64; the DB and the program are x86).

The machine is a VM configured on VMware ESXi. To get to the point: if I configure the VM with only 1 vCPU the program runs very well; if I configure it with 2 vCPUs (1 socket and 2 cores) or more, the performance is halved.

The problem is that by leaving only one vCPU, the CPU is perpetually at 100% even just from Windows Server background jobs (e.g. searching for updates and other tasks).

Do you have any advice?

PS: I can't switch to Firebird x64 because many libraries are x86.

ZFS performance: Extreme low write speed

Posted: 21 Nov 2021 01:27 PM PST

I am running a small home server. The specs are:

  • CPU: AMD Ryzen 5 2600
  • RAM: 32 GB ECC
  • System drive: 128GB NVMe SSD
  • Data drives: 3x 4 TB Seagate Barracuda HDD

The server runs some applications like Nextcloud or Gitea and I want to run 1-2 VMs on it. So there are some web applications, databases and VMs.

The applications and qcow2 images are stored on a raidz1 pool:

$ sudo zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

When I used the applications in the first weeks, I experienced no problems. But for a few weeks now I have been seeing extremely low write speeds. The Nextcloud instance is not very fast, and when I try to start a fresh VM with Windows 10 it needs about 5 minutes to get to the login screen.

I did some performance testing using fio and got following results:

Test              IOPS     Bandwidth (KiB/s)
random read       37,800             148,000
random write          31                 127
sequential read   72,100             282,000
sequential write      33                 134

I did some research before posting here and read that I should add an SLOG device to the ZFS pool for better performance with databases and VMs. But that's not an option at the moment; I need to buy Christmas gifts first :D

But even without an SLOG I don't think these figures can be right :(

Does anyone have an idea? :)
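
For reference, a sketch of a typical fio random-write run of this kind (all parameters here are assumptions; the exact command used for the table above is not shown):

```shell
# 4k random-write test with direct I/O against a file on the pool
# (path and sizes are illustrative):
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
    --size=1G --iodepth=16 --direct=1 --runtime=60 --time_based \
    --filename=/tank/fio.test
```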

Joining two tables having uncommon fields

Posted: 21 Nov 2021 01:55 PM PST

I have two tables with no field in common.

+-----+
| Mon |
+-----+
| Jan |
| Feb |
| Mar |
+-----+

+-------+
| Ccode |
+-------+
| A     |
| B     |
| C     |
+-------+

Desired Output as follows

+-----+-------+
| Mon | Ccode |
+-----+-------+
| Jan | A     |
| Jan | B     |
| Jan | C     |
| Feb | A     |
| Feb | B     |
| Feb | C     |
| Mar | A     |
| Mar | B     |
| Mar | C     |
+-----+-------+
CREATE TABLE Month(
   Mon VARCHAR(3) NOT NULL PRIMARY KEY
);
INSERT INTO Month(Mon) VALUES ('Jan');
INSERT INTO Month(Mon) VALUES ('Feb');
INSERT INTO Month(Mon) VALUES ('Mar');

CREATE TABLE CustCode(
   Ccode VARCHAR(1) NOT NULL PRIMARY KEY
);
INSERT INTO CustCode(Ccode) VALUES ('A');
INSERT INTO CustCode(Ccode) VALUES ('B');
INSERT INTO CustCode(Ccode) VALUES ('C');

Can anybody help me out to achieve the above objective?

Thanks MP
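
For what it's worth, the desired output is the Cartesian product of the two tables, which a CROSS JOIN produces; a minimal sketch against the DDL above (add an ORDER BY if a guaranteed row order is needed):

```sql
SELECT m.Mon, c.Ccode
FROM Month AS m
CROSS JOIN CustCode AS c;
```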

Directory-Service-SAM Error

Posted: 21 Nov 2021 02:22 PM PST

In the Windows System event log, there are errors from Directory-Services-SAM, saying: "The request for a new account-identifier pool failed. The operation will be retried until the request succeeds. The error is: The role owner attribute could not be read."

Also, how do I find out which user / device a given UserID belongs to?
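
As context for anyone investigating the same message: the "role owner attribute" in this error generally refers to the RID master FSMO role, so a first check (a sketch; run on any DC) is which DC currently claims it:

```bat
netdom query fsmo
```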

Confused about how to manipulate GCS bucket/object permissions

Posted: 21 Nov 2021 01:56 PM PST

On my laptop I have a directory which contains a subdirectory, which in turn contains a bunch of HTML files. It looks like this:

% ls -lR 2000-09
2000-09:
total 12
drwxrwxr-x 2 skip skip 12288 Nov 18 07:42 html

2000-09/html:
total 648
-rw-r--r-- 1 skip skip 18489 Dec  4  2019 index.html
-rw-r--r-- 1 skip skip 18489 Dec  4  2019 maillist.html
-rw-r--r-- 1 skip skip  3468 Dec  4  2019 msg00000.html
-rw-r--r-- 1 skip skip  3270 Dec  4  2019 msg00001.html
-rw-r--r-- 1 skip skip  3194 Dec  4  2019 msg00002.html
...

I use gsutil to upload that directory to my bucket:

% gsutil -m cp -r 2000-09 gs://my-secret-bucket/
Copying file://2000-09/html/msg00070.html [Content-Type=text/html]...
Copying file://2000-09/html/msg00060.html [Content-Type=text/html]...
Copying file://2000-09/html/msg00029.html [Content-Type=text/html]...
Copying file://2000-09/html/msg00052.html [Content-Type=text/html]...
...

It looks fine through the console. I see 2000-09, inside that html, and inside that a bunch of files. So far, so good.

Now I open up a cloud shell and mount that bucket using gcsfuse:

gcsfuse my-secret-bucket ~/mnt  

but the resulting directory view appears empty:

me@cloudshell:~ (whatever)$ ls -l ~/mnt
total 0
me@cloudshell:~ (whatever)$

Then I upload a couple files directly into my bucket (at the top level)

% gsutil -m cp wrench?.jpg gs://my-secret-bucket/
Copying file://wrench1.jpg [Content-Type=image/jpeg]...
Copying file://wrench2.jpg [Content-Type=image/jpeg]...
/ [2/2 files][  1.0 MiB/  1.0 MiB] 100% Done
Operation completed over 2 objects/1.0 MiB.

I confirm that they are there in the console, then list my mounted bucket again. They are visible and I can read them:

$ ls -l ~/mnt
total 1049
-rw-r--r-- 1 me me 432451 Nov 18 19:09 wrench1.jpg
-rw-r--r-- 1 me me 640526 Nov 18 19:09 wrench2.jpg
me@cloudshell:~ (whatever)$ cksum mnt/wrench1.jpg
3659533210 432451 mnt/wrench1.jpg
me@cloudshell:~ (whatever)$

It seems that files are visible at the top level, but I can't figure out how to make the directory and its contents visible. The web interface is extremely confusing for an old Unix guy like me, who wants to see things like rw-r--r-- in long listings and run chmod 0644 ... when something doesn't look right. How do I make my 2000-09 object/folder/directory and (recursively) its entire contents visible? Ultimately, I want it visible to a GCP-hosted Flask web app (also owned by me, so it does not necessarily need to be publicly visible).
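
One likely explanation (worth verifying against the gcsfuse docs): gsutil cp does not create placeholder "directory" objects, and by default gcsfuse only shows directories that are backed by such placeholders. gcsfuse can instead infer directories from object name prefixes:

```shell
gcsfuse --implicit-dirs my-secret-bucket ~/mnt
```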

Bad performance on multiple loop devices used as file containers

Posted: 21 Nov 2021 12:49 PM PST

Currently I'm managing a backup service for multiple remote servers. Backups are written through rsync, and every backup has its own file container mounted as a loop device. The main backup partition is an 8T XFS-formatted volume, and the loop devices are between 100G and 600G, either ext2- or ext4-formatted. So, this is the Matryoshka-like solution, simplified:

df -Th
> /dev/vdb1    xfs   8,0T   /mnt/backups
> /dev/loop1   ext2  100G   /mnt/srv1
> /dev/loop2   ext2  200G   /mnt/srv2

mount
> /dev/vdb1 on /mnt/backups
> /mnt/backups/srv1.ext2 on /mnt/srv1
> /mnt/backups/srv2.ext2 on /mnt/srv2

ls -R /mnt/backups
> /mnt/backups
> └─/mnt/backups/srv1.ext2
> └─/mnt/backups/srv2.ext2

The main problem is the read/write speeds: they are very slow. Also, sometimes everything hangs and eats up all my CPU and RAM; I can see the loop devices are causing that.

Lately I've started switching the containers from ext4 to ext2, because I thought I didn't really need the journaling and hoped it would improve the speeds. I've also been switching from sparse files to non-sparse files, hoping it would lower the CPU/RAM usage. But the problem persists and sometimes renders the system unresponsive.

Therefore, I'm looking for a better solution with faster r/w speeds. It's also important to be able to quickly see the disk space every profile uses (I can simply use df for now; du would be too slow). The separation the loop devices give is nice from a security standpoint, but that could also be achieved using rsync over SSH instead, so it's not a requirement.

I've been thinking about shrinking the main XFS partition and making the file containers real ext4 partitions, but that would bring huge amounts of downtime whenever the first partition needs to be resized. I've also been thinking about using virt-make-fs or squashfs, because I could simply read the file size to get the disk usage, but I have no experience with those.

Anybody any ideas if there's a better solution for this?
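
One hedged alternative to consider (a sketch only; mount options, project IDs, and limits are illustrative): XFS project quotas can provide per-directory accounting and caps on the main partition directly, without loop devices:

```shell
# mount the backup filesystem with project quota accounting enabled
mount -o prjquota /dev/vdb1 /mnt/backups

# define project 1 as the srv1 directory
echo "1:/mnt/backups/srv1" >> /etc/projects
echo "srv1:1" >> /etc/projid

# initialize the project and cap it at 100G
xfs_quota -x -c 'project -s srv1' /mnt/backups
xfs_quota -x -c 'limit -p bhard=100g srv1' /mnt/backups

# per-project usage report (fast, unlike du)
xfs_quota -x -c 'report -p' /mnt/backups
```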

Weird Active Directory DNS Issue

Posted: 21 Nov 2021 03:03 PM PST

I am having a DNS issue I cannot figure out. For one specific hostname, when I create an A record, the name ends up changing when it replicates to the other DNS servers in AD.

We currently have two virtual DC/DNS servers and one physical DC/DNS server, and replication is working between them. But for whatever reason, when I create the record on any server, by the time it reaches the other ones it has an accent over one letter, and the server I created it on has two entries: one with the accent and one without, both with the same IP.

There is only one record in the reverse lookup zone for that IP, and I cannot create an A record or a CNAME without the accent on the servers the record is replicated to; Windows sees the two names as the same.

My guess is somewhere on the servers is a remnant of the mistake I made when I initially created the record (copied and pasted without thinking) and that is causing the current issue. If anyone has any suggestion on where to look in order to fix this, I would be very grateful.

Orphaned Domain in Windows Forest - Unable to Connect to Cluster in Hyper-V Failover Cluster Manager

Posted: 21 Nov 2021 03:15 PM PST

Have a question here that pertains to an orphaned domain, specifically trying to connect to a Hyper-V cluster in Failover Cluster Manager.

We have a Windows forest with a root domain of domain.tld. Inside the forest there are 4 domains (something.domain.tld, other.domain.tld, etc.), each with multiple domain controllers except for one: other.domain.tld has just a single domain controller.

The domain controller for other.domain.tld is corrupt and will not boot, and following all the recovery methods put forth by Microsoft in their technet and community forums we are unable to recover the NTDS database. Also tried following a number of blogs and guides found on the Internet. Unfortunately, there are no backups of the server or checkpoints from prior to the server becoming corrupted.

The corrupted DC is hosted on an accessible Hyper-V cluster.

Within the other.domain.tld domain there are 2 Hyper-V compute nodes, which I would normally connect to using Failover Cluster Manager, with a SAN as the storage node. The cluster is currently running multiple VMs, but I am unable to connect to the cluster since both ADDS and DNS for the other.domain.tld domain are currently unavailable. Logging into the compute nodes as a local admin also does not grant me the ability to administer or connect to the cluster. The cluster DNS address is also unknown at this time, as the previous technical team missed some items in their documentation. (image: rough cluster layout)

This is a multi-part question:

  1. Can I disjoin the Hyper-V hosts from the current inaccessible domain and join them to a working domain without losing the cluster?
  2. Is it possible to disconnect the VMs from the cluster so they are not managed by the cluster?
  3. For the storage, should I expect any issues if I follow through with #1, or will the cluster storage still be available if I move the Hyper-V machines to a new domain and set up a new FCM cluster?

I know how to purge orphaned domains within Windows Active Directory; I just need to get to the point where I can.

Thanks in advance!

sed: customizing config file header with a defined length?

Posted: 21 Nov 2021 12:14 PM PST

I use sed to customize LXC container configuration files from the LXC host, and that works well so far. But when adjusting the comment headers (hostname and date), there are aesthetic problems with the width of the headers: when hostnames of different lengths are substituted, the total width of the header is not automatically compensated at the end.

In my example the string SERVER should be replaced.

############################
# RSYSLOG Konfiguration    #
# SERVER:/etc/rsyslog.conf #
# t12@RMS 2020-03-23       #
############################

############################
# RSYSLOG Konfiguration    #
# gersrv:/etc/rsyslog.conf   #
# t12@RMS 2020-04-23       #
############################

############################
# RSYSLOG Konfiguration    #
# sv4:/etc/rsyslog.conf   #
# t12@RMS 2020-06-23       #
############################

How can I get this with sed? Or do I need awk?

sed -i "s/SERVER/${servername}/g" /path to container/etc/rsyslog.conf  
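
One way to keep the trailing # aligned is to pad the replacement to the fixed inner width before substituting; a sketch in plain shell (the width 24 is derived from the example header above; adjust as needed):

```shell
servername=sv4

# Left-justify "host:/etc/rsyslog.conf" in a 24-character field,
# then rebuild the header line around it:
padded=$(printf '%-24s' "${servername}:/etc/rsyslog.conf")
line="# ${padded} #"
echo "$line"   # the line is always 28 characters wide, matching the header
```

The padded value can then be fed into the existing sed call in place of the raw ${servername}.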

GCP Console script to automatically migrate VPS to higher tier?

Posted: 21 Nov 2021 11:49 AM PST

Is there a sample gcloud script to migrate a machine from general-purpose to compute-optimized? It's a web server, so I'd rather the new machine kept its IP as well. Downtime of 10-20 min is OK.
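
A sketch of the usual sequence (instance name, zone, and target machine type are illustrative):

```shell
gcloud compute instances stop web-1 --zone=europe-west2-b
gcloud compute instances set-machine-type web-1 \
    --zone=europe-west2-b --machine-type=c2-standard-4
gcloud compute instances start web-1 --zone=europe-west2-b
```

Note that an ephemeral external IP may change across a stop/start, so promoting the address to a static IP first is advisable if keeping the IP matters.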

Import MySql DB Dump Into New MariaDB

Posted: 21 Nov 2021 01:21 PM PST

I am on Ubuntu 20.04, trying to migrate from MySQL to MariaDB 10.5. I have MariaDB installed correctly, and I am trying to import the dump of all of my DBs into the new MariaDB using mysql -u root -p < all_dbs.sql, but it just outputs:

ERROR 1005 (HY000) at line 87: Can't create table mysql.db (errno: 168 "Unknown (generic) error from engine")

I am fairly new to database administration and I would appreciate some detailed instructions on how to solve this problem.

My steps:

1.) First, I dumped all my DBs into a .sql file: mysqldump -u root -p --all-databases > all_dbs.sql
2.) Then, I removed the MySQL server from Ubuntu: sudo apt purge mysql-server
3.) From here, I installed MariaDB:
sudo apt update && sudo apt upgrade
sudo apt -y install software-properties-common
sudo apt-key adv --fetch-keys 'https://mariadb.org/mariadb_release_signing_key.asc'
sudo add-apt-repository 'deb [arch=amd64] http://mariadb.mirror.globo.tech/repo/10.5/ubuntu focal main'
sudo apt update
sudo apt install mariadb-server mariadb-client # I foolishly answered "no", since it was telling me it was safe to do so...
4.) I tried to import my dump file using mysql -u root -p < all_dbs.sql, but ran into the error ERROR 1698 (28000): Access denied for user 'root'@'localhost', so I used these instructions to solve that problem.
5.) Which, of course, led to a new problem: Unknown collation: 'utf8mb4_0900_ai_ci' #1902, which I solved with sed -i all_dbs.sql -e 's/utf8mb4_0900_ai_ci/utf8mb4_unicode_ci/g'
6.) And now, when I run mysql -u root -p < all_dbs.sql, it outputs:

ERROR 1005 (HY000) at line 87: Can't create table mysql.db (errno: 168 "Unknown (generic) error from engine")

Any tips? Originally asked here (did not receive a detailed answer).
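
One commonly suggested workaround (a sketch; database names are placeholders): the error occurs while the dump tries to recreate the mysql system schema, which differs between MySQL and MariaDB, so dumping only the application databases and recreating users/grants by hand on the MariaDB side avoids the conflict:

```shell
# dump only the application schemas, not the mysql system schema:
mysqldump -u root -p --databases app_db1 app_db2 > app_dbs.sql

# import into MariaDB:
mysql -u root -p < app_dbs.sql
```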

Where shared object is located in Linux

Posted: 21 Nov 2021 01:55 PM PST

I want to know where .so file information is stored in Linux. I am looking for libruby.so.2.6.

From what I read on the Internet, ld.so first searches LD_LIBRARY_PATH, then the ld.so.conf file and the cache, and then the default paths like /lib and /usr/local/lib.

In my case Ruby is installed in /opt/puppetlabs/puppet/root/bin, and when I ran ldd /opt/puppetlabs/puppet/root/bin, I got the location of libruby.so as /opt/puppetlabs/puppet/lib/libruby.so.2.6.

So I can get the location of the shared object, but I would like to know where it got the details from. I have checked LD_LIBRARY_PATH and the ld.so.conf file, and I could not find that entry there. Could someone please help me understand this?
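
Besides those locations, the search path can be embedded in the ELF binary itself as a DT_RPATH/DT_RUNPATH entry, which is the usual mechanism for vendored interpreters like Puppet's Ruby; a sketch for inspecting it (the exact binary path is an assumption based on the question):

```shell
readelf -d /opt/puppetlabs/puppet/root/bin/ruby | grep -iE 'rpath|runpath'
```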

Elastic Beanstalk Health Degraded

Posted: 21 Nov 2021 01:05 PM PST

I am trying to deploy a Node.js Docker image to Elastic Beanstalk using Travis CI. The tests and builds in Travis keep passing and deploying successfully; however, I keep getting the following warning and error on my Elastic Beanstalk console:

WARN: Environment health has transitioned from Info to Degraded. Incorrect application version found on all instances. Expected version "Sample Application" (deployment 1). Application update failed 31 seconds ago and took 15 minutes.

ERROR: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.

I am using the free tier, so I am not sure if that's the issue or what exactly I am doing wrong. Below is what my .travis.yml file looks like:

sudo: required
services:
- docker
before_install:
- docker build -t poolafrica/pool_auth -f Dockerfile.dev .
script:
- docker run poolafrica/pool_auth npm run test -- --coverage

deploy:
  provider: elasticbeanstalk
  edge: true
  access_key_id: $AWS_ACCESS_ID
  secret_access_key:
    secure: $AWS_SECRET_KEY
  region: eu-west-2
  app: pool_auth
  env: PoolAuth-env
  bucket_name: elasticbeanstalk-eu-west-2-747115545713
  on:
    branch: master
    skip_cleanup: true

How do I boot the Debian 10 image with qemu/kvm?

Posted: 21 Nov 2021 03:01 PM PST

I'm attempting to boot the OpenStack image of Debian 10 using qemu and am hitting an error detecting the hard drive; the end of the boot sequence shows:

[    0.989085] Run /init as init process
Loading, please wait...
Starting version 241
[    1.068365] SCSI subsystem initialized
[    1.073933] cryptd: max_cpu_qlen set to 1000
[    1.085586] AVX2 version of gcm_enc/dec engaged.
[    1.085699] PCI Interrupt Link [LNKA] enabled at IRQ 10
[    1.086342] AES CTR mode by8 optimization enabled
[    1.094169] scsi host0: ata_piix
[    1.095524] scsi host1: ata_piix
[    1.096120] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc2c0 irq 14
[    1.097198] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc2c8 irq 15
[    1.108170] PCI Interrupt Link [LNKB] enabled at IRQ 11
[    1.120402] virtio_blk virtio0: [vda] 736 512-byte logical blocks (377 kB/368 KiB)
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
Begin: Waiting for root file system ... Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
... line repeats ...
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
done.
Gave up waiting for root file system device.  Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT!  UUID=77e3f255-2ef2-47bc-ad89-7cdbd65f5fbc does not exist.  Dropping to a shell!

BusyBox v1.30.1 (Debian 1:1.30.1-4) built-in shell (ash)
Enter 'help' for a list of built-in commands.

From the initramfs prompt, I can see the following:

(initramfs) cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.19.0-5-cloud-amd64 root=UUID=77e3f255-2ef2-47bc-ad89-7cdbd65f5fbc ro biosdevname=0 net.ifnames=0 console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0 systemd.show_status=true

(initramfs) cat /proc/modules
ata_generic 16384 0 - Live 0xffffffffc00ba000
crc32c_intel 24576 0 - Live 0xffffffffc00b3000
virtio_blk 20480 0 - Live 0xffffffffc00ad000
aesni_intel 200704 0 - Live 0xffffffffc0156000
ata_piix 36864 0 - Live 0xffffffffc0147000
aes_x86_64 20480 1 aesni_intel, Live 0xffffffffc0135000
crypto_simd 16384 1 aesni_intel, Live 0xffffffffc0130000
libata 245760 2 ata_generic,ata_piix, Live 0xffffffffc00da000
cryptd 28672 2 aesni_intel,crypto_simd, Live 0xffffffffc00d2000
glue_helper 16384 1 aesni_intel, Live 0xffffffffc00cb000
scsi_mod 237568 1 libata, Live 0xffffffffc0072000
virtio_pci 28672 0 - Live 0xffffffffc0066000
virtio_ring 28672 2 virtio_blk,virtio_pci, Live 0xffffffffc005b000
virtio 16384 2 virtio_blk,virtio_pci, Live 0xffffffffc0053000

(initramfs) ls /dev
block               tty18               tty5
char                tty19               tty50
console             tty2                tty51
core                tty20               tty52
cpu_dma_latency     tty21               tty53
disk                tty22               tty54
fd                  tty23               tty55
full                tty24               tty56
hpet                tty25               tty57
input               tty26               tty58
kmsg                tty27               tty59
mem                 tty28               tty6
memory_bandwidth    tty29               tty60
network_latency     tty3                tty61
network_throughput  tty30               tty62
null                tty31               tty63
psaux               tty32               tty7
ptmx                tty33               tty8
pts                 tty34               tty9
random              tty35               ttyS0
snapshot            tty36               ttyS1
stderr              tty37               ttyS2
stdin               tty38               ttyS3
stdout              tty39               urandom
tty                 tty4                vcs
tty0                tty40               vcs1
tty1                tty41               vcsa
tty10               tty42               vcsa1
tty11               tty43               vcsu
tty12               tty44               vcsu1
tty13               tty45               vda
tty14               tty46               vga_arbiter
tty15               tty47               zero
tty16               tty48
tty17               tty49

(initramfs) ls /dev/disk/by-label/
cidata

(initramfs) ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x    2 0        0               60 Jul 22 00:07 .
drwxr-xr-x    5 0        0              100 Jul 22 00:07 ..
lrwxrwxrwx    1 0        0                9 Jul 22 00:07 2019-07-21-20-02-09-00 -> ../../vda

To recreate this VM, first I setup the qcow2 image off of the downloaded base image (downloaded from the openstack image site):

~/vm/deb10-test$ sha256sum ../base-debian-10/base.qcow2
d4c2966d996a3e08c198be41640d54b5d0c038cfc21b4d05e4b769824974daaf  ../base-debian-10/base.qcow2

~/vm/deb10-test$ qemu-img create -f qcow2 -o "backing_file=../base-debian-10/base.qcow2" image.qcow2 20G
Formatting 'image.qcow2', fmt=qcow2 size=21474836480 backing_file=../base-debian-10/base.qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16

Next I setup a seed.img for cloud-init using cloud-localds:

~/vm/deb10-test$ cat meta-data
instance-id: iid-deb10-test-20190721-193118
hostname: deb10-test
local-hostname: deb10-test

~/vm/deb10-test$ cat network-config
---
version: 1
config:
- type: physical
  name: eth0
  subnets:
  - type: static
    address: 192.168.237.42
    netmask: 255.255.255.0
    routes:
    - network: 0.0.0.0
      netmask: 0.0.0.0
      gateway: 192.168.237.1
- type: nameserver
  address: [192.168.234.10, 192.168.234.254, 8.8.8.8]
  search: []

~/vm/deb10-test$ cat user-data
#cloud-config
users:
  - default
chpasswd:
  list: |
    debian:passw0rd
  expire: False
ssh_pwauth: True
package_update: true
packages:
- python
bootcmd:
# disable automatic dhcp
- sed -e '/^#/! {/eth0/ s/^/# /}' -i /etc/network/interfaces

~/vm/deb10-test$ cloud-localds -v ./seed.img --network-config network-config user-data meta-data
wrote ./seed.img with filesystem=iso9660 and diskformat=raw

And lastly I ran virt-install with the following:

~/vm/deb10-test$ virt-install \
    --os-variant auto --virt-type kvm \
    --name deb10-test --graphics none --import \
    --disk path="./image.qcow2",format=qcow2,bus=scsi \
    --disk path="./seed.img",bus=virtio \
    --cpu host --vcpus 2 --memory 2048 \
    --network network=routed

A similar procedure works for Debian 9 images, and using bus=virtio works for many other Linux images (CentOS and Ubuntu). I'm at a loss as to why the hard drive device is not showing up while the rest of the initramfs appears to be working. Are there different options I need to pass to work with Debian 10?


Edit: Attempting the following did not solve the issue:

  • --machine q35: no visible difference, from the documentation this doesn't appear to be needed with KVM.
  • --disk path=./seed.img,device=cdrom,bus=sata: no visible difference
  • --controller scsi,model=virtio-scsi: this actually broke the boot further, just hanging on a blank console, no grub, kernel booting, or initramfs prompt. Using model=auto got back to the initramfs prompt.
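
For reference, the kernel log above only ever detects the virtio seed disk (vda), while the root disk was attached with bus=scsi and the initramfs loads no matching SCSI HBA driver; a sketch of the same invocation with the root disk moved to the virtio bus (untested against this exact image):

```shell
~/vm/deb10-test$ virt-install \
    --os-variant auto --virt-type kvm \
    --name deb10-test --graphics none --import \
    --disk path="./image.qcow2",format=qcow2,bus=virtio \
    --disk path="./seed.img",bus=virtio \
    --cpu host --vcpus 2 --memory 2048 \
    --network network=routed
```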

multiple URL based reverse proxy

Posted: 21 Nov 2021 12:07 PM PST

After quite some time searching the Internet, I'm still struggling to configure my Apache proxy virtual host.

My setup is quite simple:

  • server hosts several NodeJS-express apps
    • one hand-made REST API (listening on port 8080)
    • one adminMongo (listening on port 8081)
  • Apache listens on port 80 and is accessible at 10.8.0.1

Here's the Apache config file that's been close to working:

<VirtualHost *:80>
    <Location /custom>
        RewriteEngine on
        RewriteRule ^/custom/(.*) /$1
        ProxyPass http://localhost:8080/
        ProxyPassReverse http://localhost:8080/
    </Location>

    <Location /mongo>
        RewriteEngine on
        RewriteRule ^/mongo/(.*) /$1
        ProxyPass http://localhost:8081/
        ProxyPassReverse http://localhost:8081/
    </Location>
</VirtualHost>

This Vhost was inspired by this post: Apache: proxy based on URL suffixes.

The main problem is that when I try to GET http://10.8.0.1/mongo/ I'm redirected to http://10.8.0.1/app/login/ (the Express app is doing this), which gives me a 404 error since my Apache has nothing to serve at /.

How can I make all URLs beginning with /mongo/ be served by the app listening on http://127.0.0.1:8081/?

Any suggestion appreciated.
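
One direction worth trying (a sketch, untested): in server context, ProxyPass/ProxyPassReverse pairs can map the prefix and rewrite the app's Location: redirect headers, without any RewriteRule; redirects to paths outside the prefix (like /app/login) need either an extra mapping or the app itself configured to serve under /mongo:

```apache
ProxyPass        /mongo/ http://localhost:8081/
ProxyPassReverse /mongo/ http://localhost:8081/

# The app redirects to /app/login, which is outside /mongo/; either
# configure the app to run under the /mongo prefix, or map that path too:
ProxyPass        /app/ http://localhost:8081/app/
ProxyPassReverse /app/ http://localhost:8081/app/
```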

IIS | PHP Error: No input file specified

Posted: 21 Nov 2021 02:03 PM PST

Im running IIS 7.5 / PHP 7.0 CGI. If i open a non exist .php file in my browser, i get this error:

No input file specified.

IIS don't use the 404 Error Page for .php, like in .html files. I found some solutions, for example set doc_root in php.ini or comment out open_basedir .. but it won't help. I know it's a server issue, but not which.

The question is: why do I get the "No input file specified." output, and not the IIS 404 error page, when I open a non-existent .php file?
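A commonly reported cause, offered here as a hedged sketch rather than a verified diagnosis for this server: the *.php handler mapping fires even when the target file does not exist, so the request reaches php-cgi.exe, which prints "No input file specified." Restricting the handler to requests that map to an actual file lets IIS serve its normal 404 page instead. In web.config this corresponds to the handler's resourceType attribute (the handler name and php-cgi.exe path below are placeholders and will differ per machine):

```xml
<configuration>
  <system.webServer>
    <handlers>
      <!-- Placeholder handler name; check Handler Mappings in IIS Manager -->
      <remove name="PHP_via_FastCGI" />
      <add name="PHP_via_FastCGI" path="*.php" verb="*"
           modules="FastCgiModule"
           scriptProcessor="C:\PHP\php-cgi.exe"
           resourceType="File" />
    </handlers>
  </system.webServer>
</configuration>
```

The equivalent IIS Manager setting is Handler Mappings, Edit, Request Restrictions, "Invoke handler only if request is mapped to: File".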

Ubuntu 17.04 virt-clone ERROR missing source information for device sdx

Posted: 21 Nov 2021 12:07 PM PST

I am on an Ubuntu 17.04 server using KVM.
I tried to clone a VM after stopping it; I actually could not do a clean shutdown, so I did a destroy:

virsh destroy origVM  

Then I tried cloning using:

virt-clone --original origVM --name cloneVM --file /var/lib/libvirt/images/cloneVM.img  

and I got:

ERROR missing source information for device sdx

So I tried

virt-clone --original origVM --auto-clone  

and I got:

ERROR Could not use path /dev/disk/by-id/.... for cloning: don't know how to create storage path /dev/disk/by-id/.... Use libvirt APIs to manage the parent directory as a pool first.

I am thinking this is related to the fact that, for origVM, I have dedicated two physical HDDs from the host. I define these HDDs in the XML by their disk/by-id paths.

I could use guidance on how to deal with this.

lastLogon vs. lastLogonTimestamp in Active Directory

Posted: 21 Nov 2021 02:23 PM PST

An employee left the company. I am trying to find out when his AD account was last logged in: before the dismissal or after.

There are two attributes in the user properties window: lastLogon and lastLogonTimestamp. The lastLogon date is earlier than the dismissal date, but the lastLogonTimestamp date is later than the dismissal date (so in that case we would have a security problem).

How do I know which of these attributes shows the actual last AD account login time? What is the difference between them?

user properties - attribute editor

Apache client denied by server configuration and wrong log

Posted: 21 Nov 2021 02:03 PM PST

I'm trying to configure a new virtual host with Apache 2.4.16. Premise: I already have other virtual hosts and they work fine, so all I did was duplicate an existing vhost and change the paths and names.

The scenario is this one. I created a new vhost that contains this:

<VirtualHost *:80>
    DocumentRoot "/Users/me/Sites/mynewsite/web"
    ServerName mynewsite.lo

    <Directory "/Users/me/Sites/mynewsite/web">
        Require all granted
        Options FollowSymLinks
    </Directory>

    ErrorLog /var/log/apache2/mynewsite.localhost-error.log
    CustomLog /var/log/apache2/mynewsite.localhost-access.log combine
</VirtualHost>

At this point I tried to load the page mynewsite.lo/robots.txt and I get this error:

Forbidden

You don't have permission to access /robots.txt on this server.

I tried to investigate by looking at access_log and error_log, but nothing was written there. Then, using ls -latr, I discovered that a different log file had changed: myoldsite.localhost-error.log.

Looking in that file I found

[Wed Oct 21 16:16:32.979200 2015] [authz_core:error] [pid 283] [client 127.0.0.1:56427] AH01630: client denied by server configuration: /Users/me/Sites/myoldsite
[Wed Oct 21 16:16:33.206456 2015] [authz_core:error] [pid 283] [client 127.0.0.1:56427] AH01630: client denied by server configuration: /Users/me/Sites/myoldsite, referer: http://mynewsite.lo/robots.txt
[Wed Oct 21 16:16:33.277496 2015] [authz_core:error] [pid 283] [client 127.0.0.1:56427] AH01630: client denied by server configuration: /Users/me/Sites/myoldsite, referer: http://mynewsite.lo/robots.txt

Now I don't know what's happening: the logs are written to the wrong file, and when I try to reach mynewsite.lo I get the Forbidden error.

What am I doing wrong?

Do CMD scripts run faster than BAT scripts?

Posted: 21 Nov 2021 01:05 PM PST

I recently heard from someone that Windows admins should use CMD logon scripts instead of BAT logon scripts, because they execute faster. Apparently BAT scripts are notoriously slow.

I've done a bit of Googling and I can't find any evidence to back up that claim. Is this a myth, or does anyone know more about it?

lookup file name in sql and rename file

Posted: 21 Nov 2021 03:01 PM PST

I am trying to use a PowerShell script that takes the account number from a file name and renames the file with the matching ID number from a SQL database. Below is the code I am using, but I am not getting the results I need. Please let me know if you have any suggestions or advice on getting this to work.

Thanks!!

File name = 111119999.docx

Table =

ID AccountNumber

5555 111119999

## Select Data from Database

function Select-Info($CliRef)
{
    $conn = New-Object System.Data.SqlClient.SqlConnection
    # SqlClient does not accept OLE DB-style "provider=..." strings;
    # use a SqlClient connection string instead
    $conn.ConnectionString = "Server=vmsvr039;Database=crs5_oltp;Integrated Security=SSPI;"
    $conn.Open()
    $query = "Select convert(varchar,cnsmr_accnt_idntfr_agncy_id) as ID FROM cnsmr_accnt WHERE cnsmr_accnt_crdtr_rfrnc_id_txt = '$CliRef'"
    $cmd = New-Object System.Data.SqlClient.SqlCommand($query, $conn)   # was $sql, which is never defined
    $result = $cmd.ExecuteScalar()   # ExecuteNonQuery returns -1 for a SELECT
    $conn.Close()
    return $result   # was "return $query", which returned the SQL text, not the ID
}

## Return ID from Database

function Return-Info($CliRef)
{
    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=vmsvr039;Database=crs5_oltp;Integrated Security=SSPI;"
    $conn.Open()
    $cmd = New-Object System.Data.SqlClient.SqlCommand
    $cmd.CommandText = "Select convert(varchar,cnsmr_accnt_idntfr_agncy_id) as ID FROM cnsmr_accnt WHERE cnsmr_accnt_crdtr_rfrnc_id_txt = '$CliRef'"
    $cmd.Connection = $conn
    $result = $cmd.ExecuteScalar()
    $conn.Close()
    return $result
}

## Collect the file names
$FiNms = Get-ChildItem H:\ps\test -Name

## Loop through each file name
foreach ($FiNm in $FiNms)
{
    ## Variable for current file path
    $file = "H:\ps\test\" + $FiNm
    ## Variable for new file path
    $newFile = "H:\ps\renamed\" + $FiNm
    # $CliRef was never assigned; take the account number from the file name
    # (e.g. 111119999.docx -> 111119999)
    $CliRef = [System.IO.Path]::GetFileNameWithoutExtension($FiNm)
    $ID = Return-Info $CliRef
    $ID = $ID + ".docx"
    Copy-Item $file -Destination $newFile
    Rename-Item $file $ID -Force
}
