Saturday, May 7, 2022

Recent Questions - Server Fault

Nginx not listening at local address

Posted: 07 May 2022 01:58 AM PDT

When I go to localhost on my PC, I can connect, but when I go to my router's public IP on the host PC, the page times out. It does work from my phone, though, and I am able to see the website.

Here is my nginx configuration: (I've replaced the listen address with ***):

server {
        listen 80;
        server_name ***;
        index index.html index.php;

        access_log /var/log/nginx/localhost.access_log main;
        error_log /var/log/nginx/localhost.error_log info;

        root /var/www/localhost/htdocs;
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/***/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/***/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
        if ($scheme != "https") {
            return 301 https://$host$request_uri;
        } # managed by Certbot
        #return 404; # managed by Certbot
}

`nohup` does not work properly with `&&`

Posted: 07 May 2022 02:33 AM PDT

I want to make a delayed background execution: for the delay I use sleep [seconds] && [command], and for the background part I use nohup.

Simple example of nohup alone:

nohup date &>> out.log &

And of course you can find the output of date in out.log.

But I want to delay the execution, like 15 seconds, so a little combination:

nohup sleep 15 && date &>> out.log &

This time it does not work properly: sleep 15 is executed, but date does not always run. If you close the terminal, nothing after the && is executed.

What is wrong with nohup? Does it surrender to &&?
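For what it's worth, a common workaround (my suggestion, not from the post) is to run the whole compound command under a single shell, so nohup covers both sides of the &&; nohup itself only wraps the one command it is given, and everything after && is still run by the interactive shell:

```shell
# nohup only protects the single command it is given ("sleep 15" above);
# wrapping the whole pipeline in `sh -c` puts both commands under nohup.
# `>> out.log 2>&1` is the portable spelling of bash's `&>>`:
nohup sh -c 'sleep 15 && date' >> out.log 2>&1 &
```

After closing the terminal, the shell spawned by nohup keeps running, so date still fires once the sleep finishes.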

Nginx rewrite rule to make php see https scheme

Posted: 07 May 2022 12:36 AM PDT

I want my PHP application to only see the https scheme, even though the secure connection is terminated upstream.

I have the following setup: Browser --https--> nginx --http--> nginx --> php-fpm socket

Now I need the PHP application to only know about the original https scheme of the request.

Is that even possible?

The only alternative I see is to make the nginx to nginx traffic also over https. But I want to avoid the overhead for local traffic.
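One possible approach (a sketch, not from the post): have the internal nginx trust an X-Forwarded-Proto header set by the TLS-terminating nginx, and derive the HTTPS FastCGI parameter from it, so PHP's $_SERVER['HTTPS'] reports "on". The variable name $fastcgi_https below is my own choice:

```
# On the TLS-terminating (front) nginx, inside the proxying location:
#     proxy_set_header X-Forwarded-Proto $scheme;

# On the internal nginx (http context): map the forwarded scheme.
map $http_x_forwarded_proto $fastcgi_https {
    default "";
    https   "on";
}

server {
    listen 80;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        # PHP populates $_SERVER['HTTPS'] from this parameter:
        fastcgi_param HTTPS $fastcgi_https;
    }
}
```

This avoids re-encrypting the nginx-to-nginx hop; the caveat is that the internal nginx must only be reachable from the front one, or the header can be spoofed.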

BIND9 how to have 2 reverse resolutions for 2 different domains

Posted: 06 May 2022 11:47 PM PDT

I have one server Bind9 and 2 different domains.

I'd like to have reverse resolution for each domain.

I've tried the configuration below, but named-checkconf reports this error:

/etc/bind/named.conf:30: zone '10.0.10.in-addr.arpa': already exists
previous definition: /etc/bind/named.conf:19

My configuration :

zone "ngux.org" {
    type master;
    file "/etc/bind/db.ngux.org";
    allow-transfer { 10.0.10.99; };
};

zone "10.0.10.in-addr.arpa" {
    type master;
    file "/etc/bind/db.ngux.org.rev";
    allow-transfer { 10.0.10.99; };
};

zone "ldap.ngux.lan" {
    type master;
    file "/etc/bind/db.ldap.ngux.lan";
    allow-transfer { 10.0.10.99; };
};

zone "10.0.10.in-addr.arpa" {
    type master;
    file "/etc/bind/db.ldap.ngux.lan.rev";
    allow-transfer { 10.0.10.99; };
};

What should I do? Keep only one file for the two sets of reverse addresses?
Thanks.
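For context: named-checkconf rejects the second declaration because a zone name may appear only once per view, and a reverse zone is keyed by the network (10.0.10.0/24 here), not by the forward domain. The usual fix is a single 10.0.10.in-addr.arpa zone whose file holds the PTR records for hosts of both domains. A sketch, with hypothetical host names, addresses, and file name:

```
// named.conf: one reverse zone for the whole 10.0.10.0/24 network
zone "10.0.10.in-addr.arpa" {
    type master;
    file "/etc/bind/db.10.0.10.rev";
    allow-transfer { 10.0.10.99; };
};

// /etc/bind/db.10.0.10.rev: PTR records for hosts of BOTH domains
// (names and last octets below are illustrative only)
// $TTL 86400
// @    IN SOA ns1.ngux.org. admin.ngux.org. ( 1 3600 900 604800 86400 )
//      IN NS  ns1.ngux.org.
// 10   IN PTR www.ngux.org.        ; 10.0.10.10
// 20   IN PTR dir.ldap.ngux.lan.   ; 10.0.10.20
```

Each IP can only have one canonical PTR anyway, so merging the two files loses nothing.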

How to increase the number of groups sent by ADFS via SAML to Jenkins?

Posted: 06 May 2022 11:21 PM PDT

Yesterday we managed to integrate our Jenkins CI server with Microsoft ADFS via SAML 2.0. When mapping Jenkins roles to the user's received groups, we noticed that only 80 groups are shown in the user profile in Jenkins. The logs suggest that only 80 groups were sent in the SAML response. Unfortunately, the groups we use for access control were not among them; I assume some group limit was reached and the remaining groups were left out. Is there any way to increase the number of groups sent by ADFS? Or is Jenkins limiting the number of groups somehow? I read somewhere that ADFS tends to flatten nested LDAP groups, which is why this limit is reached.

What software options are available for setting up a file caching server?

Posted: 06 May 2022 10:36 PM PDT

To lower costs, we are planning to set up a local caching layer for file serving for our game. We would have a 5 Gb/s up/down fiber link. Ideally, if a file is missing, the software would download it, cache it, and then forward it to the requesting user. I am new to this type of server, and I wasn't sure whether an existing software solution for this task already exists. Ideally I would like to host it on a Mac or Linux box, but if there is a good Windows-only solution, I'd be open to that as well.

Total throughput tops out at around 4,000 requests per minute and 200 MB per minute.
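This pull-through pattern (fetch on miss, cache, then serve) is exactly what a caching reverse proxy does; one widely available option on Linux/macOS is nginx's proxy_cache. A minimal sketch; the origin URL, cache path, and sizes are placeholders to adapt:

```
# http context: define the on-disk cache (500 GB, entries expire after 30 days idle)
proxy_cache_path /var/cache/nginx/files levels=1:2 keys_zone=files:100m
                 max_size=500g inactive=30d use_temp_path=off;

server {
    listen 80;
    location / {
        proxy_pass        https://origin.example.com;   # placeholder origin
        proxy_cache       files;
        proxy_cache_valid 200 30d;
        # keep serving stale copies if the origin is down or slow:
        proxy_cache_use_stale error timeout updating;
    }
}
```

At ~67 requests/second and ~3.3 MB/s average, a single modest box with fast disks should handle this comfortably.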

RAID5 compatibility: zeroed superblock: WD Red 8TB drives: wd80efax-68LHPN0 and 2x 68KNBN0

Posted: 06 May 2022 10:25 PM PDT

Summarizing the problem: I have been scraping by with JBOD for years, but finally need a real 'micro data center'. I bought 3 drives for my centos8-stream box over a couple of months, though I have heard it can be both good and bad to get the same drives from the same lot number. They were all WD Red 8 TB drives: WD80EFAX*. But the devil is in the details: the first one was the helium-filled WD80EFAX-68LHPN0, manufactured in July 2019; the later two, bought on sale, were air-filled WD80EFAX-68KNBN0 drives from 2020. The cases looked different, but I proceeded anyway, as they were the same major version and most retailers don't even list or differentiate the rest. Unfortunately my first attempts are not going well, and sure enough it is the lone helium-filled one that seems not to be re-joining the mdadm RAID array.

Details and any research: I am using this as storage/NAS, not a webserver, for now. I don't need it to be available at boot; in fact, I might not want that possibility, depending on how the computer is being used that day. Maybe I'm having the evil maids/masseuse over (get lost!!1) and have to run out for a sec, which might turn into a day or two, but then need to re-join the Ukraine IT Army, and I have no trusted partner physically close that day, so I need to be able to understand and adjust the configuration at any time. Not that I am that cool, but it would suck to have this unreliable array write at 1/10 the speed at the worst possible time. Hello, World! Anyway,

I created my array as such: mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

I skipped setting up a partition, as I heard this is not needed if you just want one contiguous volume that you'll never change, and it can even make things more difficult. It seems to work fine, except for the one drive that won't associate with the array on its own.

cryptsetup luksFormat /dev/md0, to format my RAID array as a LUKS container.

Opened it and set up the filesystem. For future flexibility, I chose LVM: pvcreate /dev/mapper/devicename

verify successful creation with: pvs (looks good)

create volume group with vgcreate storage /dev/mapper/devname, and then put the lvm on that. lvcreate -n myLVMname -L 16T storageName

Actually I had to use 14.55T, as it appears some space is reserved, or there is a space miscalculation at this size as the discrepancy between TB and TiB grows. Anyway, I can see it's all good with pvs, vgs, and lvs.

Time to make the FS:

mkfs.ext4 /dev/mapper/devname, and at this point lsblk shows all my drives like so:

sde                   // disk
└─md0                 // our raid
  └─crypt-storage     // enc container
    └─storage-https   // mountable vol

with identical printout for /dev/sdd, and /dev/sdc. It is opened and mounted now.

Opened /etc/fstab; I can see /dev/mapper/devicename is mounted at /run/media/username/mountpoint.

If I wanted it to mount at boot, at this point I would need to do some extra things. But I don't; I want to mount it myself as needed. For good measure, I created mdadm.conf with mdadm --verbose --detail --scan > /etc/mdadm.conf, though it should be able to rebuild the array from the data in the superblocks, right?

If I want to mount at boot, I need to update /etc/crypttab with the UUID of my RAID device /dev/md0, update my initramfs with dracut -f -v, and update grub with grub2-mkconfig -o /etc/grub2-efi.cfg (on CentOS the config is usually linked at /boot/efi/EFI/centos/grub.cfg). However, I did not update grub or crypttab, since I didn't think I needed to and don't want to if I don't have to; this should all happen well after boot, right?

I rebooted, but lsblk shows the problem (trimmed):

sdc
└─sdc1
sdd
└─md0
sde
└─md0

It seems that the helium-filled wd80efax-68LHPN0 (/dev/sdc here) is not being put into the md0 array (even before the LVM-on-LUKS layer). It thinks there is a partition 'sdc1'. Is this due to a problem with how I configured it, or to different hardware or firmware? As mentioned before, retailers often won't provide the last characters of the model number, so I would hope all WD80EFAX* drives work together. WD doesn't seem to provide firmware for plain hard drives: https://community.wd.com/t/firmware-wdc-wd80efzx-68uw8n0/218166/2 Is DuckDuckGo not finding it? (Sometimes it can beat Google, and certainly the WD search function.)

I have taken apart some logic boards before and always wanted to mess with firmware, but not while all of my data has just been moved (this was the backup) to this encrypted stack, with some previous drives already wiped to be used as large thumbdrives. Shame on me for not having another backup. It is still readable, but I am also resource-constrained and trying to consolidate these 5+ JBOD drives so I can organize this never-ending increase of data in my life and have time for more volunteer/business efforts. At least now I have learned the dreaded process of recovering/working on an encrypted RAID array, to some degree.

What I've tried: I can stop the md0 device (mdadm --stop /dev/md0) and add the drive back: mdadm --add /dev/md0 /dev/sdc (and sdc1, though I wouldn't be adding a partition).

(Not --re-add; and I'm afraid of using --assume-clean, which I've heard not to use unless you're an expert.) But then it needs a two-day re-sync every time I boot the machine, which is needless wear, not to mention downtime. Not good for production, or really anything. mdadm --detail /dev/md0 shows the state is indeed 'degraded' and one device is 'removed'. I can still unlock the two drives, mount, and read; writing seems slower, but my data is now hanging off an encrypted cliff if either of the two remaining drives fails.

So I (re)added the same drive and let it sync, watching /proc/mdstat with watch. I made sure I updated my initramfs (though I should not need to?) with dracut -f -v, copied all drive info with blkid just in case, and rebooted. Same issue: /dev/sdc (the older helium drive with the same major model number) is not part of md0.

Using mdadm -v --assemble /dev/md0 --uuid=<info from /etc/mdadm.conf>, it scans and curiously says: no RAID superblock found on /dev/sdc or /dev/sdc1; for /dev/sdc it expected magic a92something, got 00000000*; for /dev/sdc1 (the "partition" that shouldn't be there, not the device), expected magic a92somethingRAIDUUIDthing, got 00000401.

The good drives : mdadm: /dev/sdf is identified as a member of /dev/md0, slot <1 or 2>

So I suspect the problem is here, if it is something I did in the configuration. Some posts elsewhere suggest something might be wiping the superblock, but I am not aware of anything in my shutdown or reboot that would do this. I have not done an evil-maid audit in some time, nor do I want to be required to do one, and I would use the tools and notes on this machine to do so. It's probably not that, but it's possible.
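A few non-destructive checks (my own suggestion, not from the post) can narrow down whether a stray partition-table signature is sitting where the md superblock should be; device names follow the post, run as root on the affected box:

```shell
# With default metadata 1.2, the md superblock sits 4 KiB from the start
# of the device, i.e. this many 512-byte sectors in:
echo $((4096 / 512))    # -> 8

# Dump whatever metadata mdadm can still see on the whole disk and on the
# phantom partition (read-only):
mdadm --examine /dev/sdc /dev/sdc1

# List every filesystem/partition-table signature WITHOUT erasing anything
# (--no-act); a stray GPT or MBR here would explain why sdc1 appears:
wipefs --no-act /dev/sdc
```

If wipefs reports a partition-table signature on /dev/sdc that the other two drives lack, that points at something writing a label to that disk rather than at a firmware difference.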

I could still update grub or my /etc/crypttab, but it really seems I shouldn't need to: this is basically a giant RAID thumbdrive I'll sometimes need to hook up to the rest of the system. Something else screwy seems to be going on relating to superblocks, hardware, or firmware, especially since there are visible differences in the case and the board between the helium drive and the two others that (unfortunately) work fine.

any ideas?

like:

  • better examine my startup programs? (seems unlikely, but)
  • some way to flash new firmware to the drives? Though I would like OEM or open methods if possible. They read the same size blocks etc. right so this shouldn't matter anyway?
  • buy more drives until I find a match or go scour ebay and guess?? sounds expensive and wasteful
    • I could try guessing at the local computer store, but what is someone to do who doesn't have one of these?
    • what would I do with the helium drive after if it's useless as a backup drive for the array?
  • post on SuperUser or somewhere else? This place seemed great

not interested in:

  • booting windows (it doesn't play nice with Linux and completely hides my unsigned drivers for projects with override to get to the override buried in bios, because f*ck M$)
    • though I do have an 8.1 box for M$ Teams in need of recovery key, and some HDD cradles if I must
  • some sketchy closed-source hard drive utility even if handcoded ASM by Jesus Christ himself
  • getting rid of RAID (I know there are horror stories, especially when adding encryption, which is arguably not needed for a static system but depends on your threat environment; there are also tons of companies betting their business on RAID). I would like to learn from this one and build some more, now that this incident has forced me to learn a bunch about them.

There seem to be enough open tools and knowledge to figure this out; it isn't rocket science, but it is a ton of moving parts working together that I might not be completely familiar with. After a few weeks of tinkering, crawling posts, and RTFMs, I thought I should ask. I hope I have provided enough detail without dragging it out. Thanks in advance!!

Unable to access Samba, Apache2 on Ubuntu Server

Posted: 06 May 2022 09:07 PM PDT

I've got a PC running Ubuntu 20.04 that I use as my home server, with a VM, Samba, a webserver, DDNS, SSH, etc., but it randomly just stopped showing up on my local network for half the services.

I can no longer connect to my Samba shares from any devices (connection timed out) but locally on the machine via 127.0.0.1 they appear just fine. Apache no longer works, I can't access my webpage internally on the network or externally (HTTP port 80).

The machine can access the internet just fine, can be pinged from other devices, and I can SSH in just fine. I haven't changed anything on the router, and the machine shows up just fine under 192.168.1.3. I can also run remote iperf3 tests to it, so I don't believe it is a router/ISP issue (although the results are much slower than they should be).

All the services mentioned above are running fine on the machine. I reinstalled apache2 and then tried nginx, and can confirm they both bind to port 80 with netstat -plant | grep 80, which gave: tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 229301/nginx: maste

I'm totally stuck as to what is causing this; everything else on my network works fine, and the only settings I have for this PC are a couple of external port forwards (80, 433 and 5201) on the router. I'm currently connected to my home network over VPN, but the issues are present both over VPN and when connected to local Wi-Fi.

Any help to solve this would be greatly appreciated :) Thanks

ifconfig gives:

eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.3  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::5a95:86f3:75e2:9524  prefixlen 64  scopeid 0x20<link>
        ether 4c:72:b9:57:42:b7  txqueuelen 1000  (Ethernet)
        RX packets 5399190  bytes 1320272608 (1.3 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1003884  bytes 425572182 (425.5 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xf7c00000-f7c20000

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 353136  bytes 37884073 (37.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 353136  bytes 37884073 (37.8 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
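Since ping and SSH work but HTTP and SMB time out from other hosts, a host firewall silently dropping those ports is the classic suspect; a quick diagnostic sketch (my suggestion, run on the server):

```shell
# Which sockets are listening, and on which addresses? A service bound
# only to 127.0.0.1 would explain "works locally, times out remotely":
ss -tlnp | grep -E ':80 |:445 '

# Is ufw or a raw iptables rule dropping inbound traffic? (needs root)
sudo ufw status verbose
sudo iptables -L INPUT -n -v
```

If the iptables counters show packets hitting a DROP/REJECT rule while you retry from another device, the firewall is the culprit rather than the services.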

Is HAProxy in front of Stunnel with SNIs possible?

Posted: 06 May 2022 09:45 PM PDT

I have a working SSL Termination with STunnel in front of HAproxy.

Recently, the matter of adding support for HTTP/2 was thrown my way. That is easy with HAProxy, but, as a constraint, STunnel must stay.

The reason STunnel needs to stay is about 17,000 lines of SNIs and the ability to manage those via an API that is already in place.

I could very well add a cert-list for HAProxy containing the SNIs; a couple of greps and echos will do the trick.

However, during my searches I haven't yet found anyone putting HAProxy in front of STunnel in front of HAProxy. Is that the wrong approach?

Here's what I already started working on (no SNIs in there yet - 17000 of them would be a bit too much for a post):

HAProxy frontend (where I need to add HTTP/2 support) with encryption towards STunnel:

listen frontend
    bind 192.168.1.100:443 transparent
    mode http
    server stunnel 127.0.0.100:443 ssl verify none

STunnel

[STunnel]
    cert = /etc/ssl/certs/cert.pem
    ciphers = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:AES256-GCM-SHA384:AES256-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:AES128-GCM-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA
    accept = 127.0.0.100:443
    connect = 127.0.0.100:80
    delay = yes
    options = NO_SSLv3
    options = NO_TLSv1
    options = NO_TLSv1.1
    options = NO_TLSv1.3
    options = CIPHER_SERVER_PREFERENCE
    options = DONT_INSERT_EMPTY_FRAGMENTS
    renegotiation = no
    protocol = proxy
    local = 127.0.0.100
    TIMEOUTclose = 0

HAProxy "backend"

listen Web
    bind 127.0.0.100:80 transparent accept-proxy
    mode http
    balance leastconn
    acl SSL-443 src 127.0.0.100
    tcp-request connection expect-proxy layer4 if STunnel
    option http-keep-alive
    timeout http-request 5s
    timeout tunnel 1h
    option redispatch
    option abortonclose
    maxconn 40000
    option httplog
    server server1 192.168.1.98:80 check
    server server2 192.168.1.99:80 check

I assumed encryption is required from HAProxy to STunnel, and I would need to account for any protocol mismatches between those.

What would the STunnel equivalent of HAProxy's tcp-request connection expect-proxy layer4 if STunnel be?

Any help in getting HTTP/2 support with STunnel is greatly appreciated, as is a "Don't do that, it's wrong".
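For reference on the HTTP/2 half: HAProxy negotiates h2 via ALPN on a TLS bind line, which only works where HAProxy itself terminates TLS. A sketch of what the frontend would look like if HAProxy terminated instead of STunnel (the crt path is a placeholder; HAProxy loads every cert in the directory and selects by SNI):

```
frontend fe_https
    # ALPN advertises h2; clients that can't speak it fall back to HTTP/1.1
    bind 192.168.1.100:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    mode http
    default_backend Web
```

If STunnel must keep terminating, STunnel would have to negotiate the ALPN token itself and hand HAProxy clear-text HTTP/2, which HAProxy only accepts with an explicit proto h2 on the bind line; that combination is worth testing carefully before committing to it.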

Thank you,

Only federate some users in AzureAD and not a whole domain

Posted: 07 May 2022 01:33 AM PDT

We want to test a new IdP in our organization (an in-house SAML-compatible IdP). We are using Azure AD.

If we federate a new domain, we can test the authentication, and it works ( xxx@NewDomain.Com).

Now, we want to select some real users from our main domain (User1@MainDomain.com) and federate only these users, so that they can start testing the IdP without interrupting all the other users. Is this possible? Can we federate only some users to use an IdP in Azure AD, or must it always be a whole domain?

Our goal is a gradual migration of users, so that we can fix any initial bugs with minimal impact.

ZFS and SAN: issue with data scrubbing

Posted: 07 May 2022 01:12 AM PDT

Working as scientists in a corporate environment, we are provided with storage resources from a SAN within an Ubuntu 20.04 virtual machine (Proxmox). The SAN controller is passed directly to the VM (PCIe passthrough).

The SAN itself uses hardware RAID 60 (no other option is given to us) and presents us with 380 TB that we can split into a number of LUNs. We would like to benefit from ZFS compression and snapshotting. We have opted for 30 x 11 TB LUNs, which we then organized as striped RAID-Z. The setup is redundant (two servers), we have backups, and performance is good, which oriented us towards striped RAID-Z instead of the usual striped mirrors.

Independent of the ZFS geometry, we have noticed that a high write load (> 1 GB/s) during ZFS scrubs results in disk errors, eventually leading to faulted devices. By looking at the files presenting errors, we could link this problem to the scrubbing process trying to access data still present in the SAN's cache. With moderate loads during the scrub, the process completes without any errors.

Are there configuration parameters either for ZFS or for multipath that can be tuned within the VM to prevent this issue with the SAN cache?
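On the ZFS side, scrub I/O can at least be throttled so it competes less aggressively with the write load. These OpenZFS module parameters exist on Linux, but the exact names and sensible values should be verified against your ZFS version; the values below are illustrative only, not a recommendation:

```
# /etc/modprobe.d/zfs.conf  (illustrative values; verify against your release)
# fewer concurrent scrub reads issued per vdev:
options zfs zfs_vdev_scrub_max_active=1
# cap how much data the scanner issues per vdev at a time (bytes):
options zfs zfs_scan_vdev_limit=4194304
```

The same parameters can be changed at runtime via /sys/module/zfs/parameters/ to experiment before persisting them. This treats the symptom, though; errors appearing only under combined scrub plus heavy writes still suggest a cache-coherency or I/O-timeout issue on the SAN path worth raising with the storage team.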

Output of zpool status

  pool: sanpool
 state: ONLINE
  scan: scrub repaired 0B in 2 days 02:05:53 with 0 errors on Thu Mar 17 15:50:34 2022
config:

        NAME                                        STATE     READ WRITE CKSUM
        sanpool                                     ONLINE       0     0     0
          raidz1-0                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000002e  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000002f  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000031  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000032  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000033  ONLINE       0     0     0
          raidz1-1                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000034  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000035  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000036  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000037  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000038  ONLINE       0     0     0
          raidz1-2                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000062  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000063  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000064  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000065  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000066  ONLINE       0     0     0
          raidz1-3                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000006a  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000006b  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000006c  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000006d  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000006f  ONLINE       0     0     0
          raidz1-4                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000070  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000071  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000072  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000073  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000074  ONLINE       0     0     0
          raidz1-5                                  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000075  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000076  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000077  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b00300000079  ONLINE       0     0     0
            wwn-0x60060e8012b003005040b0030000007a  ONLINE       0     0     0

errors: No known data errors

Output of multipath -ll

mpathr (360060e8012b003005040b00300000074) dm-18 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:25 sdz  65:144 active ready running
  `- 8:0:0:25 sdbd 67:112 active ready running
mpathe (360060e8012b003005040b00300000064) dm-5 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:13 sdn  8:208  active ready running
  `- 8:0:0:13 sdar 66:176 active ready running
mpathq (360060e8012b003005040b00300000073) dm-17 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:24 sdy  65:128 active ready running
  `- 8:0:0:24 sdbc 67:96  active ready running
mpathd (360060e8012b003005040b00300000063) dm-4 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:12 sdm  8:192  active ready running
  `- 8:0:0:12 sdaq 66:160 active ready running
mpathp (360060e8012b003005040b00300000072) dm-16 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:23 sdx  65:112 active ready running
  `- 8:0:0:23 sdbb 67:80  active ready running
mpathc (360060e8012b003005040b00300000062) dm-3 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:11 sdl  8:176  active ready running
  `- 8:0:0:11 sdap 66:144 active ready running
mpatho (360060e8012b003005040b00300000071) dm-15 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:22 sdw  65:96  active ready running
  `- 8:0:0:22 sdba 67:64  active ready running
mpathb (360060e8012b003005040b00300000038) dm-2 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:10 sdk  8:160  active ready running
  `- 8:0:0:10 sdao 66:128 active ready running
mpathn (360060e8012b003005040b00300000070) dm-14 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:21 sdv  65:80  active ready running
  `- 8:0:0:21 sdaz 67:48  active ready running
mpatha (360060e8012b003005040b0030000002e) dm-1 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:1  sdb  8:16   active ready running
  `- 8:0:0:1  sdaf 65:240 active ready running
mpathz (360060e8012b003005040b00300000033) dm-26 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:5  sdf  8:80   active ready running
  `- 8:0:0:5  sdaj 66:48  active ready running
mpathm (360060e8012b003005040b0030000006f) dm-13 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:20 sdu  65:64  active ready running
  `- 8:0:0:20 sday 67:32  active ready running
mpathy (360060e8012b003005040b00300000032) dm-25 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:4  sde  8:64   active ready running
  `- 8:0:0:4  sdai 66:32  active ready running
mpathl (360060e8012b003005040b0030000002f) dm-12 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:2  sdc  8:32   active ready running
  `- 8:0:0:2  sdag 66:0   active ready running
mpathx (360060e8012b003005040b0030000007a) dm-24 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:30 sdae 65:224 active ready running
  `- 8:0:0:30 sdbi 67:192 active ready running
mpathad (360060e8012b003005040b00300000037) dm-30 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:9  sdj  8:144  active ready running
  `- 8:0:0:9  sdan 66:112 active ready running
mpathk (360060e8012b003005040b0030000006d) dm-11 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:19 sdt  65:48  active ready running
  `- 8:0:0:19 sdax 67:16  active ready running
mpathw (360060e8012b003005040b00300000031) dm-23 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:3  sdd  8:48   active ready running
  `- 8:0:0:3  sdah 66:16  active ready running
mpathac (360060e8012b003005040b00300000036) dm-29 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:8  sdi  8:128  active ready running
  `- 8:0:0:8  sdam 66:96  active ready running
mpathj (360060e8012b003005040b0030000006c) dm-10 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:18 sds  65:32  active ready running
  `- 8:0:0:18 sdaw 67:0   active ready running
mpathv (360060e8012b003005040b00300000079) dm-22 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:29 sdad 65:208 active ready running
  `- 8:0:0:29 sdbh 67:176 active ready running
mpathab (360060e8012b003005040b00300000035) dm-28 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:7  sdh  8:112  active ready running
  `- 8:0:0:7  sdal 66:80  active ready running
mpathi (360060e8012b003005040b0030000006b) dm-9 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:17 sdr  65:16  active ready running
  `- 8:0:0:17 sdav 66:240 active ready running
mpathu (360060e8012b003005040b00300000077) dm-21 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:28 sdac 65:192 active ready running
  `- 8:0:0:28 sdbg 67:160 active ready running
mpathaa (360060e8012b003005040b00300000034) dm-27 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:6  sdg  8:96   active ready running
  `- 8:0:0:6  sdak 66:64  active ready running
mpathh (360060e8012b003005040b0030000006a) dm-8 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:16 sdq  65:0   active ready running
  `- 8:0:0:16 sdau 66:224 active ready running
mpatht (360060e8012b003005040b00300000076) dm-20 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:27 sdab 65:176 active ready running
  `- 8:0:0:27 sdbf 67:144 active ready running
mpathg (360060e8012b003005040b00300000066) dm-7 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:15 sdp  8:240  active ready running
  `- 8:0:0:15 sdat 66:208 active ready running
mpaths (360060e8012b003005040b00300000075) dm-19 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:26 sdaa 65:160 active ready running
  `- 8:0:0:26 sdbe 67:128 active ready running
mpathf (360060e8012b003005040b00300000065) dm-6 HITACHI,OPEN-V
size=11T features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 7:0:0:14 sdo  8:224  active ready running
  `- 8:0:0:14 sdas 66:192 active ready running

Errors on a zpool filesystem

Posted: 07 May 2022 01:13 AM PDT

I'm using ZFS on a Debian 9 machine. This machine has been working for years without any problem until today.

The zfs pool is mounted on top of a RAID system, controlled by hardware (so only one drive is exposed to Linux as sda). You can see the output of "zpool status" below.

Before continuing, I should mention that I checked the consistency of the RAID, and everything is fine.

Suddenly, any access to the filesystem causes the command to freeze (even a simple ls), and eventually I need to reboot the machine manually.

When running zpool status -v, the output is:

#/sbin/zpool status -v    pool: export   state: ONLINE  status: One or more devices has experienced an error resulting in data          corruption.  Applications may be affected.  action: Restore the file in question if possible.  Otherwise restore the          entire pool from backup.     see: http://zfsonlinux.org/msg/ZFS-8000-8A    scan: scrub repaired 0B in 53h4m with 0 errors on Tue Mar 15 05:28:38 2022  config:            NAME        STATE     READ WRITE CKSUM          export      ONLINE       0     0     0            sda       ONLINE       0     0     0    errors: Permanent errors have been detected in the following files:            export/home:<0x0>          export/home:<0x2b2ed23>          export/home:<0x2e1183b>          export/home:<0x2b2e849>          export/home:<0x1d0b5b1>  

So, the main question is: What is the meaning of those files? How do I fix this problem?

Thank you in advance!
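For context, entries like `export/home:<0x2b2ed23>` are ZFS object numbers for files whose paths could no longer be resolved (often files that have since been deleted), and `<0x0>` refers to the dataset's own master node. One way to investigate is `zdb`; a hedged sketch, assuming the object number from the output above (some zdb builds want the object number in decimal, so converting first is the safe route):

```shell
# ZFS reports unresolvable paths as <dataset>:<object-number-in-hex>.
# Convert the hex object number to decimal with shell arithmetic:
obj=$((0x2b2ed23))
echo "$obj"    # -> 45280547

# Then dump that object's metadata (and its path, if one still exists);
# run as root on the affected host:
#   zdb -dddd export/home "$obj"
```

If zdb reports the object as missing, the corrupted file has already been deleted and a scrub should eventually clear the error from `zpool status`.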

How to enter "special" characters in the password file?

Posted: 07 May 2022 12:17 AM PDT

What is the range of characters allowed in the password field in the password.client file in Exim4?

My password has the :, ! and . characters. Are these permitted as is? If not, how do I encode them?

PS: The credentials are for Exim as a client to a "smarthost".
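I can't speak for every Exim version, but in the Debian-style `passwd.client` file each line is split on colons into target, login, and password, so `!` and `.` should be fine as-is, while a literal `:` risks being read as a field separator. A hypothetical example entry (hostname and credentials are placeholders):

```
# /etc/exim4/passwd.client  --  target.mail.server:login:password
# '!' and '.' can appear literally; test carefully if the password
# itself contains a ':', since that is the field separator.
smarthost.example.net:myuser:my.pass!word
```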

snort3 Undefined variable in the string: HOME_NET

Posted: 07 May 2022 01:08 AM PDT

I have installed snort3 on my Ubuntu server using this guide from the Snort web site:

Snort 3.0.1 on Ubuntu 18 & 20

I have compiled it according to the instructions and edited /usr/local/etc/snort/snort.lua to add my HOME_NET and other variables as per the document.

Once I enable the snort3-community.rules I see these errors.

Finished /usr/local/etc/snort/snort.lua:                                                                                                                                                                                                        Loading ips.rules:                                                                                                                                                                                                                              Loading /usr/local/etc/rules/local.rules:                                                                                                                                                                                                       Finished /usr/local/etc/rules/local.rules:                                                                                                                                                                                                      Loading /usr/local/etc/rules/snort3-community.rules:                                                                                                                                                                                            ERROR: /usr/local/etc/rules/snort3-community.rules:1778 Undefined variable in the string: $HOME_NET.                                                                                                                                            ERROR: /usr/local/etc/rules/snort3-community.rules:1778 undefined variable in the string: $EXTERNAL_NET.                                                                                                                                        FATAL: /usr/local/etc/rules/snort3-community.rules:1778 ***PortVar Lookup failed on '$HTTP_PORTS'.  

These variables are defined in:

  • /usr/local/etc/snort/snort.lua
    HOME_NET = [[ 10.0.0.0/24 192.168.0.0/24 ]]      EXTERNAL_NET = 'any'  
  • /usr/local/etc/snort/snort_defaults.lua
    HTTP_PORTS =  [[      80 81 311 383 591 593 901 1220 1414 1741 1830 2301 2381 2809 3037 3128      3702 4343 4848 5250 6988 7000 7001 7144 7145 7510 7777 7779 8000 8008      8014 8028 8080 8085 8088 8090 8118 8123 8180 8181 8243 8280 8300 8800      8888 8899 9000 9060 9080 9090 9091 9443 9999 11371 34443 34444 41080      50002 55555   ]]  

But they are not visible to the rules. Can anyone suggest why?
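One thing worth checking, hedged since I'm going from the stock snort3 config layout: defining `HOME_NET` as a Lua variable is not enough by itself; the `ips` module has to be told to use the variable table when it parses the rules files. A sketch of the relevant block in snort.lua (`default_variables` and `RULE_PATH` come from snort_defaults.lua):

```
-- in /usr/local/etc/snort/snort.lua (sketch)
ips =
{
    -- without this, $HOME_NET / $EXTERNAL_NET / $HTTP_PORTS in the
    -- rules files have nothing to resolve against
    variables = default_variables,
    include = RULE_PATH .. '/snort3-community.rules',
}
```

Running `snort -c /usr/local/etc/snort/snort.lua -T` afterwards validates the whole configuration, includes and rules, without capturing traffic.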

Dual Gateway Setup in Mikrotik

Posted: 06 May 2022 11:54 PM PDT

I'm new to the Mikrotik environment, and I need some help with the following scenario:

  • I have an ADSL router (main internet connection) with IP range of 192.168.1.0/24, connected to Ethernet 1 of my Mikrotik router (WAN Port)

  • I have another ADSL router (VPN connection to connect to main branch) with IP range of 172.200.1.0/24, connected to Ethernet 2 of my Mikrotik router

  • I have a WiFi-enabled Mikrotik as an AP bridge with an IP range of 192.168.88.0/24 (everyone connects to this router via WiFi or a physical connection)

What I want to do is as follows:

  • When people want to access the Internet, the Mikrotik router should route packets automatically to Ethernet 1 interface (first ADSL).

  • If people want to go to certain destinations (e.g. 221.35.12.x) their packet has to be routed to Ethernet 2, which is the Second ADSL to connect to main branch.

Additional information

The gateway for first ADSL is 192.168.1.1, and for second one is 172.200.1.17.

So far, I have managed to access the gateway of the second ADSL, but when I ping the actual destination address of 221.35.12.x, it returns unreachable, and when I tracert that address, it shows the packet going to 192.168.88.1 and dropping there.

Can anyone help with a complete solution for the above scenario?
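The traceroute dying at 192.168.88.1 is consistent with the router having no route for 221.35.12.x and/or no NAT on the second WAN. A hedged RouterOS sketch, assuming the gateways from the question and that the two ADSL routers sit on ether1 and ether2:

```
# Default route for general internet via ADSL 1
/ip route add dst-address=0.0.0.0/0 gateway=192.168.1.1 distance=1

# Specific destination via ADSL 2; the more specific prefix wins
/ip route add dst-address=221.35.12.0/24 gateway=172.200.1.17

# Masquerade LAN traffic leaving through either WAN interface
/ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
/ip firewall nat add chain=srcnat out-interface=ether2 action=masquerade
```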

Connecting Google Cloud Functions across Projects

Posted: 06 May 2022 11:08 PM PDT

I am using Google Cloud Functions and have multiple projects with cloud functions, that need to communicate with each other. My problem is that functions can only communicate with each other if they have Ingress settings set to "allow all traffic." As soon as I change it to the desired setting, which is "Allow internal Traffic Only" projectB can't talk to projectA. The two projects are Firebase projects which have a VPC network configured as well as Serverless VPC in order to communicate with a back end database.

From what I can tell, Google is saying I should create a VPC SC perimeter that includes all the projects that need to talk to each other, which is meant to solve the problem. I have done that, but I still have access issues when set to "allow internal traffic only".

I also tried setting up a VPC network with a static private IP address. From projectB I then tried to communicate with projectA on the private IP, but I am getting timeout errors.

Both projectA and projectB have VPCs set up with internal private IPs.

I also tried using VPC peering between the projects, but still get the timeout issue.

Could anyone offer any advice?
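One detail that often matters here, hedged since I don't know the exact deployment settings in play: for the callee to see the request as "internal", the caller's egress must actually leave through its Serverless VPC connector, not the default internet path. A sketch of the two settings involved (function and connector names are placeholders):

```
# Callee in projectA: only accept traffic that arrives as internal
gcloud functions deploy calleeFunction --project=projectA \
    --ingress-settings=internal-only

# Caller in projectB: route all egress through its Serverless VPC
# connector so the request reaches projectA as internal traffic
gcloud functions deploy callerFunction --project=projectB \
    --vpc-connector=my-connector --egress-settings=all
```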

Ubuntu 20.04 time sync problems and possibly incorrect status information

Posted: 06 May 2022 11:58 PM PDT

I have been having some problems with crashes on my KVM host (Lubuntu 20.04), and when troubleshooting, I noticed some time-related errors.

Upon further investigation, to my horror, I saw that time was not being synced. I am sure it was set up before; I have no clue how it came to be undone.

admin@virtland:~$ sudo timedatectl  [sudo] password for admin:                  Local time: Fri 2020-07-10 09:14:14 EDT               Universal time: Fri 2020-07-10 13:14:14 UTC                     RTC time: Fri 2020-07-10 13:14:14                        Time zone: America/New_York (EDT, -0400)  System clock synchronized: no                                           NTP service: n/a                                      RTC in local TZ: no                             admin@virtland:~$   

I found this thread and tried the top answer, but to no avail: https://askubuntu.com/questions/929805/timedatectl-ntp-sync-cannot-set-to-yes

admin@virtland:~$ sudo systemctl stop ntp  admin@virtland:~$ sudo ntpd -gq  10 Jul 09:17:57 ntpd[34358]: ntpd 4.2.8p12@1.3728-o (1): Starting  10 Jul 09:17:57 ntpd[34358]: Command line: ntpd -gq  10 Jul 09:17:57 ntpd[34358]: proto: precision = 0.070 usec (-24)  10 Jul 09:17:57 ntpd[34358]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature  10 Jul 09:17:57 ntpd[34358]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2020-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37  10 Jul 09:17:57 ntpd[34358]: Listen and drop on 0 v6wildcard [::]:123  10 Jul 09:17:57 ntpd[34358]: Listen and drop on 1 v4wildcard 0.0.0.0:123  10 Jul 09:17:57 ntpd[34358]: Listen normally on 2 lo 127.0.0.1:123  10 Jul 09:17:57 ntpd[34358]: Listen normally on 3 enp6s0 10.0.0.18:123  10 Jul 09:17:57 ntpd[34358]: Listen normally on 4 lo [::1]:123  10 Jul 09:17:57 ntpd[34358]: Listen normally on 5 enp6s0 [fe80::7285:c2ff:fe65:9f19%3]:123  10 Jul 09:17:57 ntpd[34358]: Listening on routing socket on fd #22 for interface updates  10 Jul 09:17:58 ntpd[34358]: Soliciting pool server 209.50.63.74  10 Jul 09:17:59 ntpd[34358]: Soliciting pool server 4.53.160.75  10 Jul 09:18:00 ntpd[34358]: Soliciting pool server 69.89.207.199  10 Jul 09:18:00 ntpd[34358]: Soliciting pool server 72.30.35.88  10 Jul 09:18:01 ntpd[34358]: Soliciting pool server 173.0.48.220  10 Jul 09:18:01 ntpd[34358]: Soliciting pool server 162.159.200.1  10 Jul 09:18:01 ntpd[34358]: Soliciting pool server 108.61.73.243  10 Jul 09:18:02 ntpd[34358]: Soliciting pool server 208.79.89.249  10 Jul 09:18:02 ntpd[34358]: Soliciting pool server 208.67.75.242  10 Jul 09:18:02 ntpd[34358]: Soliciting pool server 91.189.94.4  10 Jul 09:18:03 ntpd[34358]: Soliciting pool server 91.189.89.198  10 Jul 09:18:03 ntpd[34358]: Soliciting pool server 67.217.112.181  10 Jul 09:18:04 ntpd[34358]: Soliciting pool server 91.189.89.199  10 Jul 09:18:04 ntpd[34358]: Soliciting pool server 64.225.34.103           
    10 Jul 09:18:05 ntpd[34358]: Soliciting pool server 91.189.91.157               10 Jul 09:18:06 ntpd[34358]: Soliciting pool server 2001:67c:1560:8003::c8      10 Jul 09:18:06 ntpd[34358]: ntpd: time slew +0.001834 s                        ntpd: time slew +0.001834s                                                      admin@virtland:~$ sudo service ntp start                                         admin@virtland:~$ sudo timedatectl                                              Local time: Fri 2020-07-10 09:18:21 EDT                                     Universal time: Fri 2020-07-10 13:18:21 UTC                                           RTC time: Fri 2020-07-10 13:18:21                                              Time zone: America/New_York (EDT, -0400)                        System clock synchronized: no                                                                 NTP service: n/a                                                            RTC in local TZ: no                                                   admin@virtland:~$                    

I thought maybe I needed to use some more up-to-date instructions, so I tried this: https://linuxconfig.org/how-to-sync-time-on-ubuntu-20-04-focal-fossa-linux

admin@virtland:~$ timedatectl set-ntp off  Failed to set ntp: NTP not supported                                            admin@virtland:~$ timedatectl set-ntp on  Failed to set ntp: NTP not supported                                            

Then I tried this, from a different thread:

admin@virtland:~$ sudo systemctl status systemd-timesyncd.service  [sudo] password for admin:   ● systemd-timesyncd.service       Loaded: masked (Reason: Unit systemd-timesyncd.service is masked.)       Active: inactive (dead)  

I have never touched timesyncd.conf, but it is entirely commented out anyway:

cat /etc/systemd/timesyncd.conf                           #  This file is part of systemd.                                                #                                                                               #  systemd is free software; you can redistribute it and/or modify it           #  under the terms of the GNU Lesser General Public License as published by     #  the Free Software Foundation; either version 2.1 of the License, or          #  (at your option) any later version.                                          #                                                                               # Entries in this file show the compile time defaults.                          # You can change settings by editing this file.                                 # Defaults can be restored by simply deleting this file.                        #                                                                               # See timesyncd.conf(5) for details.                                                                                                                            [Time]                                                                          #NTP=                                                                           #FallbackNTP=ntp.ubuntu.com                                                     #RootDistanceMaxSec=5                                                           #PollIntervalMinSec=32                                                          #PollIntervalMaxSec=2048   

I checked timedatectl again, and now it is on, but still not using NTP. I understand that NTP is more precise, and that can be important in some situations. Not sure if virtualization with pci passthrough needs extremely precise time or not.

From other stuff I was reading, I thought maybe NTP was conflicting with timesyncd. So I removed ntp for the time being:

sudo systemctl stop ntp  sudo apt-get purge ntp  

But after purging ntp, NTP showed as active!

admin@virtland:~$ timedatectl                 Local time: Fri 2020-07-10 09:34:52 EDT               Universal time: Fri 2020-07-10 13:34:52 UTC                     RTC time: Fri 2020-07-10 13:34:52                        Time zone: America/New_York (EDT, -0400)  System clock synchronized: yes                                          NTP service: active                                   RTC in local TZ: no       

Am I going crazy? Is NTP still here somehow? Nope.

admin@virtland:~$ sudo systemctl start ntp  Failed to start ntp.service: Unit ntp.service not found.  

Apologies for not asking a more focused question, but what the heck is going on here?

I am well and truly lost. Also, I will edit this post later and make a note as to whether removing NTP (and thus activating it?!) fixed the stability problems that led me down this rabbit hole.

Edit: The next thing I did was turn off NTP in timedatectl and (re)install the ntp package as described here. https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-18-04

That resulted in:

admin@virtland:~$ ntpq -p       remote           refid      st t when poll reach   delay   offset  jitter  ==============================================================================   0.us.pool.ntp.o .POOL.          16 p    -   64    0    0.000    0.000   0.000   1.us.pool.ntp.o .POOL.          16 p    -   64    0    0.000    0.000   0.000   2.us.pool.ntp.o .POOL.          16 p    -   64    0    0.000    0.000   0.000   3.us.pool.ntp.o .POOL.          16 p    -   64    0    0.000    0.000   0.000   0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000   1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000   2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000   3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000   ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000    0.000   0.000  admin@virtland:~$ timedatectl                 Local time: Fri 2020-07-10 10:35:39 EDT               Universal time: Fri 2020-07-10 14:35:39 UTC                     RTC time: Fri 2020-07-10 14:35:40                        Time zone: America/New_York (EDT, -0400)  System clock synchronized: no                                                                 NTP service: n/a                                                            RTC in local TZ: no                                                   admin@virtland:~$ systemctl status systemd-timesyncd   ● systemd-timesyncd.service                                                          Loaded: masked (Reason: Unit systemd-timesyncd.service is masked.)              Active: inactive (dead)                                                    admin@virtland:~$ nano /etc/ntp.conf  admin@virtland:~$ systemctl status systemd-timesyncd                             ● systemd-timesyncd.service                                                          Loaded: masked (Reason: Unit systemd-timesyncd.service is masked.)              
Active: inactive (dead)                                                    admin@virtland:~$ ntpstat  unsynchronised                                                                     polling server every 8 s   

I reversed those changes as recommended by Michael Hampton: Does this mean it's working?

boss@virtland:~$ sudo systemctl stop ntp  Failed to stop ntp.service: Unit ntp.service not loaded.  boss@virtland:~$ sudo timedatectl set-ntp yes  boss@virtland:~$ sudo timedatectl set-ntp on  boss@virtland:~$ ntpq -p  bash: /usr/bin/ntpq: No such file or directory  boss@virtland:~$ systemctl status systemd-timesyncd   ● systemd-timesyncd.service - Network Time Synchronization       Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; >       Active: active (running) since Fri 2020-07-10 10:49:18 EDT; 50s ago         Docs: man:systemd-timesyncd.service(8)     Main PID: 108365 (systemd-timesyn)       Status: "Initial synchronization to time server 91.189.94.4:123 (ntp.ubu>        Tasks: 2 (limit: 154317)       Memory: 1.8M       CGroup: /system.slice/systemd-timesyncd.service               └─108365 /lib/systemd/systemd-timesyncd    Jul 10 10:49:17 virtland systemd[1]: Starting Network Time Synchronization...  Jul 10 10:49:18 virtland systemd[1]: Started Network Time Synchronization.  Jul 10 10:49:18 virtland systemd-timesyncd[108365]: Initial synchronization t>  lines 1-14/14 (END)  
 timedatectl                 Local time: Fri 2020-07-10 10:52:56 EDT               Universal time: Fri 2020-07-10 14:52:56 UTC                     RTC time: Fri 2020-07-10 14:52:56                        Time zone: America/New_York (EDT, -0400)  System clock synchronized: yes                                          NTP service: active                                   RTC in local TZ: no              

So I guess it is working. Since the crashes that took me down this path are still happening, I guess the time wasn't the issue.
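For anyone landing here with the same symptoms: the confusing behavior above is consistent with the ntp package masking systemd-timesyncd on install (which is why `set-ntp` reported "NTP not supported") and unmasking it on removal. A hedged sketch of the direct path to the working end state:

```
# timesyncd was masked (likely by the ntp package); unmask and start it
sudo systemctl unmask systemd-timesyncd
sudo systemctl enable --now systemd-timesyncd
sudo timedatectl set-ntp on

timedatectl    # should now show "NTP service: active" and
               # "System clock synchronized: yes" once a sync completes
```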

forwarding proxmox vnc websocket with nginx

Posted: 06 May 2022 10:04 PM PDT

I installed nginx in order to be a lazy person and just go to proxmox.domain.com instead of proxmox.domain.com:8006, but now I can't access the VNC client when connected to the first address, although I can when using the IP and port directly. A friend of mine pointed out that I have to forward web sockets, so I hit the keyboard, googled it, and found this. I tried everything in there, and it isn't working. I have restarted nginx and it reported that the config file was valid.

location / {              proxy_set_header X-Real-IP $remote_addr;              proxy_set_header Host $http_host;              proxy_pass https://localhost:8006;      }        location /websockify {              proxy_http_version 1.1;              proxy_pass http://127.0.0.1:6080;              proxy_set_header Upgrade $http_upgrade;              proxy_set_header Connection "upgrade";                # VNC connection timeout              proxy_read_timeout 3600s;                #disable cache              proxy_buffering off;      }        location /vncws/ {              proxy_pass http://127.0.0.1:6080;              proxy_http_version 1.1;              proxy_set_header Upgrade $http_upgrade;              proxy_set_header Connection "upgrade";        }  

This is the block of config in my /etc/nginx/sites-enabled/proxmox. What am I doing wrong?
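For what it's worth, Proxmox's noVNC console opens its websocket against the same API port 8006 rather than a separate port 6080, so the `/websockify` and `/vncws/` blocks above likely never match anything. A hedged sketch that handles the Upgrade on the main proxy block instead:

```
# In the http {} context (e.g. a conf.d file), not inside server {}:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in /etc/nginx/sites-enabled/proxmox, one proxy block suffices:
location / {
    proxy_pass https://localhost:8006;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 3600s;   # keep long-lived VNC sessions open
    proxy_buffering off;
}
```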

failed to get D-Bus connection: Operation not permitted

Posted: 06 May 2022 11:54 PM PDT

I'm trying to list services on my CentOS image running in Docker using

systemctl list-units    

but I get this error message:

Failed to get D-Bus connection: Operation not permitted  

Any suggestions what the problem might be?
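For background: systemctl talks to systemd over D-Bus, and in a stock CentOS container there is no systemd running as PID 1, so there is nothing to connect to. One common workaround (a sketch; the exact flags vary by Docker and CentOS version) is to run the container with systemd as init:

```
# Run the image with systemd as PID 1 so systemctl has a daemon to query
docker run -d --name centos-systemd --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    centos:7 /usr/sbin/init

# Then systemctl works inside the container
docker exec centos-systemd systemctl list-units
```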

SCCM Device Collection Membership based on Machine Variable

Posted: 07 May 2022 01:08 AM PDT

I'm not sure if this is quite possible, but I'm struggling with writing the WQL query statement that would allow me to have SCCM device collections populate based on a machine variable.

Example: Device named "TestVM-01" has a machine variable named "PatchGroup" with a value of "Hour1". I would like the device collection called "Hour1" to dynamically populate any devices with the PatchGroup variable set to Hour1.

I first struggled with just querying the device variables via PowerShell and WMI, since the SMS_MachineVariable class is a lazy property of SMS_MachineSettings, so you have to call the objects by their full path.

In Powershell/WMI I can query it with something like this

(([wmi]"\\SCCM-LAB\root\sms\site_001:SMS_Machinesettings.ResourceID=11111111").machinevariables | where name -eq "PatchGroup").value  

If you query SMS_MachineSettings without specifying the full path of the object, it returns the MachineVariables attribute as empty.

Would anyone be able to tell me how I would write the WQL to pull that list of objects from the SMS_Resource class "where PatchGroup = x"?

ERR_CONNECTION_TIMED_OUT (unless I'm using a proxy)

Posted: 06 May 2022 11:07 PM PDT

I run my own online business as well as managing over a dozen self-hosted sites for other people using the wordpress.org platform. They're all hosted by a small company in the UK, and if I do experience any problems the company is usually quick to sort them out. However...

Right now, using Chrome or Safari (on an iMac and on a PC), I'm getting the message ERR_CONNECTION_TIMED_OUT when attempting to log in to wp-admin, or even when I just want to view the sites. It's not the first time this has happened, and I've done all the usual things: cleared the browser cache, double-checked the wi-fi connection, used an 'is it down or is it just me' site, etc. Btw, the sites are accessible from elsewhere (but this doesn't help me; I live and work out in the sticks). I've done pings and traceroutes and copied my hosting provider in on these (no reply yet).

I can access the sites using a proxy (e.g. anonymouse), but of course I can't edit them that way. Anyway, this wouldn't be a great solution; I want to be able to use Chrome or Safari. Anyone any ideas?

if secondary dns server is down ubuntu can not resolve

Posted: 06 May 2022 10:04 PM PDT

I am experimenting with two local DNS servers. When I take down the second (or the primary) DNS server, I cannot resolve any domain name.

Using the host command or nslookup I get a timeout error:

root@ubuntu:~# host testsrv.lan  ;; connection timed out; no servers could be reached  root@ubuntu:~# nslookup testsrv.lan  ;; Got recursion not available from 10.0.3.4, trying next server  ;; connection timed out; no servers could be reached  

But when I try the dig command I get a correct answer:

root@ubuntu:~# dig testsrv.lan     ; <<>> DiG 9.9.5-3ubuntu0.2-Ubuntu <<>> testsrv.lan  ;; global options: +cmd  ;; Got answer:  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7759  ;; flags: qr rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0  ;; WARNING: recursion requested but not available    ;; QUESTION SECTION:  ;testsrv.lan.           IN  A    ;; ANSWER SECTION:  testsrv.lan.        5   IN  A   10.0.3.4    ;; Query time: 2 msec  ;; SERVER: 10.0.3.4#53(10.0.3.4)  ;; WHEN: Thu Jun 04 17:54:28 CET 2015  ;; MSG SIZE  rcvd: 56  

(The primary DNS server is 10.0.3.4, and I have added an A record: testsrv.lan --> 10.0.3.4.)

I have used tcpdump to check what is happening under the hood (tcpdump -vvv -l -n -i any "udp port 53") and noticed that the first server responds correctly to the DNS request from my host, but the host keeps trying the second server and timing out.

Isn't Ubuntu (specifically the resolvconf service) supposed to be fault tolerant when one of the two DNS servers is down? Is this the default behavior when resolving a domain name? Is it documented anywhere? Can we change it?

N.B.: I am using Ubuntu 14.04 server, and DNS is configured in /etc/network/interfaces with dns-nameservers 10.0.3.4 10.0.3.5

Any help is appreciated. Thank you.
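For reference, the classic glibc resolver tries the nameservers strictly in the listed order and waits out a per-server timeout (5 seconds by default) before moving on, which matches the tcpdump observation above. The behavior can be tuned with resolver options; a hedged sketch (with resolvconf, edits to /etc/resolv.conf itself get overwritten, so the options line belongs in the resolvconf head file):

```
# /etc/resolvconf/resolv.conf.d/head
# timeout: seconds to wait per server before trying the next one
# attempts: total passes over the server list
# rotate: spread queries across both servers instead of always
#         hammering the first
options timeout:1 attempts:2 rotate
```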

Debian: How to create SOCKS proxy server to exit on specific network interface?

Posted: 06 May 2022 11:07 PM PDT

I have a setup with two internet connections.

  • eth0 - Internet connection 1
  • eth1 - Internet connection 2

How can I create a SOCKS 4/5 server that will take connections coming in on eth0 and proxy the traffic out through eth1?

I saw that you can use ssh to create a simple SOCKS proxy, but I am unable to route the traffic through eth1.

I also tried Dante, but with no success.
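For what it's worth, this split is exactly what Dante's `internal`/`external` directives are for. A hedged minimal danted.conf sketch (port and the wide-open rules are placeholders; tighten them for real use):

```
# /etc/danted.conf -- accept SOCKS clients on eth0, egress via eth1
internal: eth0 port = 1080
external: eth1

socksmethod: none    # no client authentication (sketch only)

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
```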

IIS: access denied to Web.Config file

Posted: 07 May 2022 12:03 AM PDT

I'm trying to set up a new website on a Windows Server 2003 machine. On this server there is already a classic ASP website on port 80. I'm configuring the new one (ASP.NET 3.5) on port 82 with .NET Framework 4.0 for now, as I keep getting an error when trying to install 3.5.

When accessing the website I get an error saying access to the web.config file is denied; if I access a test HTML file, it loads OK.

I also tried adding an impersonate clause in web.config for the machine admin user, but with no success.

The folder and files have the correct permissions for IUSR_SERVERNAME, and the web server extensions (the .NET Framework ones) are active and have permissions as well. The ASP.NET user does not exist on this machine (I read somewhere you also need to give access to this user), so I don't know what else to try.

Help please. Thank you

RDCMAN Remote Desktop Connection Manager doesn't allow all clicks or clicking

Posted: 07 May 2022 12:03 AM PDT

I'm using RDCMAN 2.2 from WIN7 x64 to WIN7 x64.

I can log in fine to remote boxes, see the remote desktop, and see the mouse move, but I cannot click on everything: I see Internet Explorer highlight as I move the mouse over it, yet I cannot click it. Even stranger, I can successfully click the icon next to Internet Explorer, Media Player.

I also cannot click the Windows Start button.

I do not have any of these problems when I use the Remote Desktop program itself.

I would guess this is a security issue.

iotop fields - What does 'TID' mean in iotop?

Posted: 06 May 2022 11:58 PM PDT

What does TID mean in iotop?

Total DISK READ: 0.00 B/s | Total DISK WRITE: 3.90 M/s    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND   2150 be/4 root        0.00 B/s    0.00 B/s  0.00 % 65.25 % [flush-202:0]   6694 be/4 www-data    0.00 B/s   19.64 K/s  0.00 %  0.00 % php-fpm: pool www   6700 be/4 www-data    0.00 B/s   23.56 K/s  0.00 %  0.00 % php-fpm: pool www   8646 be/4 www-data    0.00 B/s  424.12 K/s  0.00 %  0.00 % php-fpm: pool www  10974 be/4 www-data    0.00 B/s   19.64 K/s  0.00 %  0.00 % php-fpm: pool www  

thanks.
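For reference: TID is the thread ID. iotop accounts I/O per kernel task (thread) rather than per process, so a multi-threaded program shows one row per thread; for a single-threaded process the TID equals the PID. The relationship is visible in /proc (Linux only):

```shell
# Each thread of a process appears under /proc/<pid>/task/<tid>;
# for a single-threaded process (like this shell), the only TID
# listed is the PID itself.
pid=$$
ls /proc/"$pid"/task
```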

Where in the US is the best geographic location to host servers for the UK/Europe market?

Posted: 06 May 2022 09:57 PM PDT

We would like to keep our hosting in the US.

But for European traffic, where in the US is the best location for ping/response times? (East Coast, West Coast, Central, etc.)
