Sunday, November 7, 2021

Recent Questions - Server Fault

"failed to setup loop device" when mounting an Ubuntu iso image on Ubuntu

Posted: 07 Nov 2021 10:22 PM PST

I downloaded an iso image of Ubuntu server, ubuntu-20.04.3-live-server-amd64.iso. Then I tried to mount this iso as a folder, using the usual command

mount ubuntu-20.04.3-live-server-amd64.iso ./iso  

but I received this error:

mount: ./iso: failed to setup loop device for .../ubuntu-20.04.3-live-server-amd64.iso.  

The same error shows up if I add -o loop or -t iso9660. I checked the thread here, but the answer (lsmod | grep loop, modprobe loop) does not work for me. How can I solve this problem? I'm working on Ubuntu 20.10.

PS: must I have superuser privileges to mount an iso file? I saw many questions mounting an iso with sudo, but I experience the same error. How can I check whether I have permission to use /dev/loop0?
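
A minimal sketch of the usual first checks, assuming the iso file itself is intact (mounting via a loop device generally does require root):

sudo mkdir -p ./iso
sudo mount -o loop,ro ubuntu-20.04.3-live-server-amd64.iso ./iso

# If that still fails, check whether loop devices exist and are free:
ls -l /dev/loop*     # are the device nodes present?
sudo losetup -f      # prints the first free loop device, or an error
sudo losetup -a      # lists loop devices already in use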

How can I search for symlinks that point to other symlinks?

Posted: 07 Nov 2021 09:11 PM PST

I have a morass of chained symlinks like this scattered around:

A (symlink) -> B (symlink) -> C (file)

Some may even involve longer chains, I'm not sure yet.

When manually examining a single file it's easy to see what's going on but I'm looking for an automated way to find these and (preferably) clean them up.

In the example above, I would want the B->C symlink to stay in place, but the A->B symlink would become A->C.

Identifying them is the main goal; the quantity might end up being sufficiently low that fixing them manually wouldn't be a big deal. But finding them all manually isn't feasible.

I'm aware of the "symlinks" utility, and it is useful in many scenarios such as finding dangling symlinks, but it doesn't seem to have the ability to detect "chains" like this.

I'm aware of "find -type l" as well as the -L option for find but they don't seem to work together as "find -L -type l" always returns an empty response.

I make use of "find -type l -exec ls -l {} +" (which works better than find's -ls option) and tried to use it to obtain a list of symlink destinations, which I could then check to see whether they're symlinks or not. However, all the symlinks are relative rather than absolute, so the output is a bit messy.

For example, I get output like this:

lrwxrwxrwx 1 username 50 Nov  5 01:00  ./a/b/c/d -> ../../e/f/g/h  

From that I have to think a bit to figure out that the actual symlink destination is ./a/e/f/g/h (and I can then check to see if it's a symlink or not). Not really ideal for automation.

If the symlinks were (temporarily) absolute rather than relative this would be easier, but the "symlinks" utility can only convert absolute -> relative; as far as I can tell it can't convert relative -> absolute.
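
A sketch in POSIX sh of the kind of automation I'm after (assumes filenames without embedded newlines): resolve each link's immediate target against the link's own directory, and report targets that are themselves symlinks:

#!/bin/sh
# List symlinks whose immediate target is itself a symlink.
find . -type l | while IFS= read -r link; do
    target=$(readlink "$link")
    case $target in
        /*) resolved=$target ;;                      # absolute target
        *)  resolved=$(dirname "$link")/$target ;;   # relative: resolve against the link's directory
    esac
    if [ -L "$resolved" ]; then
        printf '%s -> %s (target is itself a symlink)\n' "$link" "$target"
    fi
done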

Forward DNS to specific ports to access docker container

Posted: 07 Nov 2021 09:38 PM PST

I have 1 domain and 1 subdomain:

example.com.au
api.example.com.au

I have 2 docker containers running 2 different applications, a frontend website and an API. These containers are accessible over 8080 (frontend) and 3000 (backend).

Both domains are on an ELB over HTTPS, and I have set up IP routing to forward traffic from http and https to port 8080, so the frontend web app is loading fine. But the webapp needs to access the API through a different domain (subdomain), and I am completely lost on how to get api.example.com.au to load data from the API on port 3000.

I thought perhaps an apache container could accept all traffic from example.com.au and api.example.com.au and then ProxyPass to the appropriate containers over the different ports, but I am unsure how to achieve this based on some examples I found, or even whether this is the best approach.
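
A sketch of that ProxyPass idea (hypothetical container hostnames "frontend" and "api" on the shared docker network; mod_proxy and mod_proxy_http must be enabled):

<VirtualHost *:80>
    ServerName example.com.au
    ProxyPreserveHost On
    ProxyPass        / http://frontend:8080/
    ProxyPassReverse / http://frontend:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName api.example.com.au
    ProxyPreserveHost On
    ProxyPass        / http://api:3000/
    ProxyPassReverse / http://api:3000/
</VirtualHost>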

Exim established many connections to strange IPs

Posted: 07 Nov 2021 08:37 PM PST

I installed VestaCP and used their mail server for my domain mails. But when I run netstat on my server, it shows some strange connections. There have been no problems with my mail server until now; I just worry about these connections.

Does my server have any security problems?

# netstat -antp
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      5033/exim
tcp        0      0 0.0.0.0:2525            0.0.0.0:*               LISTEN      5033/exim
tcp        0     35 my.server.ip.address:25 87.246.7.228:63258      ESTABLISHED 13152/exim
tcp        0     35 my.server.ip.address:25 212.70.149.88:38064     ESTABLISHED 13518/exim
tcp        0      0 my.server.ip.address:25 212.70.149.88:20194     ESTABLISHED 13519/exim

nginx Proxy Manager Custom Locations

Posted: 07 Nov 2021 09:45 PM PST

Ultimately I intend to configure nginx to proxy content from web services on different hosts. As currently set up I'm using nginx Proxy Manager with nginx in Docker containers. To exclude the complexities of web service setup from the issues of configuring the reverse proxy, I have set up web servers with static content.

  • I have Apache in a container on the same docker network as the nginx container.
  • I have IIS on my workstation.

As you can see from this grab, I have nginx successfully set up to proxy my workstation's IIS (not to mention a public DNS entry for my external interface). That's working fine.

[screenshot: IIS proxied by nginx]

These grabs show that the Apache container maps 80 to 8080 on the docker host (which is imaginatively named dockerhost), and that the browser on my workstation can access both the root document and another document by name.

[screenshots: Apache root doc; Apache another doc]

At this point I altered the nginx proxy host definition to define a custom location. Within the docker network Apache is on port 80; this is why I've specified 80 rather than 8080.

[screenshot: nginx proxy host definition, custom location]

This appears to work.

[screenshot: browser loads Apache root]

...until you try to load some other resource from Apache but get the same content.

[screenshot: trying to load something other than the root document]

It appears that absolutely anything beginning with apache/ is mapped to the root document.

[screenshot: any path under apache/ returns the root document]

At this point I went back and looked for documentation but failed to find anything relevant.

Swapping things around so that nginx proxies IIS and the custom location iis points at IIS on my workstation exhibits exactly the same problem, this time with IIS.

How should this configuration be expressed?

A Proxy Manager based answer is preferable; I have quite a bit to learn before I can use instructions on hacking the nginx config directly.

That said, for diagnostic use, here's the generated config.

# ------------------------------------------------------------
# wone.pdconsec.net
# ------------------------------------------------------------
server {
  set $forward_scheme http;
  set $server         "pnuc.lan";
  set $port           80;

  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  server_name wone.pdconsec.net;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-1/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;

  access_log /data/logs/proxy-host-1_access.log proxy;
  error_log /data/logs/proxy-host-1_error.log warn;

  location /apache {
    set              $upstream http://apache:80/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto  $scheme;
    proxy_set_header X-Forwarded-For    $remote_addr;
    proxy_set_header X-Real-IP          $remote_addr;
    proxy_pass       $upstream;
  }

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
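
One plausible culprit, hedged: when proxy_pass is given a variable whose value includes a URI part (here the trailing / in http://apache:80/), nginx replaces the entire request URI with that URI, which would explain every path under /apache/ returning the root document. A hand-written sketch of the intent, not NPM-generated config:

location /apache/ {
    proxy_set_header Host $host;
    # literal URL with trailing slash: /apache/foo.html is forwarded as /foo.html
    proxy_pass http://apache:80/;
}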

QEMU freezes at "Booting from Hard Disk..." with -nographic

Posted: 07 Nov 2021 09:33 PM PST

I have a QEMU VM; both the host and guest OS are Ubuntu 20.04. QEMU 6.1.0 was compiled without any special parameters. The guest was installed from a downloaded iso image of Ubuntu server.

If I start the VM using

qemu-system-x86_64 -hda ubuntu.qcow -m 4000  

, QEMU starts a VNC server and I can view in VNC Viewer that the guest Ubuntu OS is running properly.

But if I start the VM using

qemu-system-x86_64 -hda ubuntu.qcow -m 4000 -nographic  

, QEMU prints out the following and freezes.

SeaBIOS (version rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org)
iPXE (http://ipxe.org) 00:03.0 CA00 PCI2.10 PnP PMM+BFF8F290+BFEEF290 CA00
Booting from Hard Disk...

I can see from top that the CPU is 100% busy with qemu-system-x86 at first and goes back to idle after a while. I guess the guest OS has finished booting successfully, but I can see nothing on the screen. What I want is for the guest to take over the console of the host and output to it. I did not find a -console parameter, so I guessed -nographic would do the job. Did I choose the wrong parameter? If so, how can I see the boot procedure and the login prompt of the guest Ubuntu? Thanks.
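
For what it's worth, -nographic wires the guest's serial port to the current terminal, so the guest has to be told to put a console on that serial port; a sketch, run inside the guest (standard Ubuntu GRUB paths assumed):

# Inside the guest: send kernel console output and a login prompt to ttyS0
sudo sed -i 's/^GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="console=ttyS0"/' /etc/default/grub
sudo update-grub

# On the host, as before:
qemu-system-x86_64 -hda ubuntu.qcow -m 4000 -nographic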

Bring Cluster On-Line with only One Node

Posted: 07 Nov 2021 05:54 PM PST

I would like to set a policy such that my Failover Cluster will always come into service, even if only one (of the two nodes) is available.

Background: I have only two nodes in the cluster, plus a witness quorum in a share on the DC. For this question assume that the DC stays in-service. (Windows Server 2019).

If I shut down node1, then node2 will be active. If I then shut down node2, the cluster will be stopped (obviously). However, if I then start only node1, the cluster will never recover. Not only will it not recover without node2, but I don't see an easy way to bring the cluster into service with the cluster manager. The only way I can recover the cluster in this scenario is to start node2; however, that does not seem (to me) to be real high availability. IMO I should be able to set a policy, or have a reasonably easy way, to bring the cluster back on-line (perhaps after a waiting period), even if node2 never recovers.

Am I just thinking about this the wrong way or missing something obvious?
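
One hedged pointer: failover clustering does have a manual escape hatch for exactly this state, forcing quorum on the surviving node (node name assumed); a sketch, run elevated:

# PowerShell: start the cluster service on node1 and force quorum
Start-ClusterNode -Name node1 -FixQuorum

# Older equivalent:
# net start clussvc /forcequorum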

Cisco ASR 1001-HX

Posted: 07 Nov 2021 05:32 PM PST

I can't seem to figure out what the WAN throughput of a Cisco ASR 1001-HX is.
Also, I'm no Cisco expert, but I wanted to know: do you have a go-to cheat sheet/website that lists the throughputs of all available Cisco routers?
Thanks

Apache Mod_RemoteIP Support

Posted: 07 Nov 2021 05:26 PM PST

I currently have an old Apache 2.4.18 server on Ubuntu 16.

Does this version of Apache support mod_remoteip?

I looked in the mod_remoteip documentation, but I couldn't find the minimum version of Apache in which it is supported.
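
mod_remoteip has shipped with Apache httpd since 2.4.0, so 2.4.18 includes it; a quick sketch to enable and confirm on Ubuntu:

sudo a2enmod remoteip
sudo service apache2 restart
apachectl -M | grep remoteip    # should list remoteip_module (shared)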

Degraded RAIDZ2 but cannot find the failed disk

Posted: 07 Nov 2021 05:05 PM PST

I'm not too experienced with RAID and ZFS things, and here's what I've encountered.

A raidz2 array was showing degraded status. One disk shows unavail, and it says that it was /dev/da3p2. Strangely enough, another disk, detected as da3, is showing online and having no issues. So, my first question is: how is this possible? da3p2 is a partition of da3. How can the disk (da3) be part of the array instead of the partition (da3p2)?

So I assumed that I need to replace da3. Was I wrong here?

Before disk replacement:

[screenshot: pool status before replacement]

After replacing the disk, da3 has now become da17p2, yet the array is still degraded, showing da3p2 OFFLINE.

[screenshot: da3p2 offline]

zpool status:

root@thamar[~]# zpool status -v Volume1
  pool: Volume1
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 3.60T in 0 days 12:53:36 with 1 errors on Sun Oct 31 08:22:17 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        Volume1                                         DEGRADED     0     0     2
          raidz2-0                                      DEGRADED     0     0     4
            da1p2                                       ONLINE       0     0     0
            gptid/9290c3ba-1bfb-11ec-ac1d-002590d9d3ba  ONLINE       0     0     0
            da9p2                                       ONLINE       0     0     0
            da2p2                                       ONLINE       0     0     0
            7659882188769969497                         OFFLINE      0     0     0  was /dev/da3p2
            da15p2                                      ONLINE       2     0     0

I am also unable to find any info on that 7659882188769969497 disk. So, how do I find it and get it replaced?

This issue has lasted for weeks now; no one can figure out that mysterious da3p2.
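
A hedged sketch of the direction to try (FreeBSD-style device names as above; the long number is the missing member's ZFS guid, and replacing by guid is allowed):

zpool status -g Volume1      # show vdev members by guid
glabel status                # map gptid/daXpY labels to physical disks
# tell ZFS to rebuild the missing member onto the newly inserted disk:
zpool replace Volume1 7659882188769969497 /dev/da17p2
zpool status -v Volume1      # watch the resilver progress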

Realtek data corruption issue

Posted: 07 Nov 2021 04:13 PM PST

I have just replaced an old PCI card in my PC with a PCI Express TP-Link TG-3468 based on a Realtek chipset (02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)). It works... kind of. It seems there are some data corruption issues, as in Firefox I get SSL_ERROR_BAD_MAC_ALERT or other SSL errors I didn't get with my old card. I replaced all cables connecting the card to the router - no change, the errors are still there. I tried the built-in Fedora drivers on kernel 5.14.16 and also drivers downloaded from the Realtek site. No change. I turned off all card features using the -K option of ethtool. No change. I tried forcing 100Mb/s mode. No change.

Do you have any other ideas on how I can confirm that there is indeed a data corruption issue, and how to debug/fix it?
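
A few hedged checks that can confirm corruption independently of the browser (the interface name enp2s0 is an assumption):

ethtool -S enp2s0 | grep -iE 'err|drop|crc'   # NIC-level error counters
ip -s link show enp2s0                        # kernel RX/TX error counts
# end-to-end test: copy a large file and compare checksums on both sides
sha256sum bigfile.bin                         # on the remote machine
scp remote:bigfile.bin . && sha256sum bigfile.bin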

How to authorize only IP from a Fargate ECS service for MongoDB Atlas Cluster

Posted: 07 Nov 2021 05:31 PM PST

I have an ECS Fargate service mapped to an Application Load Balancer on AWS. In this service, there are several tasks that are frequently killed and restarted. These tasks should be able to connect to a MongoDB Atlas cluster.

Which IP should I whitelist for my Atlas cluster? Is it possible to have an elastic IP or a range of IPs for my service, so that only my service's IP(s) are allowed in my Mongo Atlas cluster?

Sorry if this question is trivial; I struggle a bit with ECS, ALB and networking on AWS.

How to sync GCP Cloud Storage Bucket metadata to a database?

Posted: 07 Nov 2021 07:56 PM PST

I have a large number of objects, currently around 1 million, stored in a GCP Cloud Storage Bucket. Objects are added at a rate of 1-2 thousand per day. I would like to efficiently run queries to look up objects in the bucket based on the metadata for those objects, including file name infix/suffix, date created, storage class, and so forth.

The Cloud Storage API allows searching by filename prefix (docs), but the callback takes several seconds to complete. I can do infix queries with gsutil, like gsutil ls gs://my-bucket/foo-*-bar.txt, but this is even slower. Additionally, these queries are considered Class A operations, which incur costs.

Rather than dealing with the Cloud Storage API for searching my bucket, I was thinking I could add a listing of all objects in my bucket to a database such as Bigtable or SQL. The database should stay in sync with all changes to the bucket, at least when objects are created or deleted, and ideally when modified, storage class changed, etc.

What is the best way to achieve this?
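
One sketch of the usual pattern, hedged (topic name and paths are placeholders): have the bucket publish object-change events to Pub/Sub, consume them with a small subscriber that upserts rows in the database, and do a one-off backfill for the existing objects:

# emit a Pub/Sub message for every create/delete/metadata change in the bucket
gsutil notification create -t bucket-changes -f json gs://my-bucket

# one-off backfill of the ~1M existing objects
gsutil ls -l gs://my-bucket/** > objects.txt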

App Version upgrades to all running instances in Google Instance Groups

Posted: 07 Nov 2021 06:33 PM PST

We have a microservice-architecture-based application. This is my first microservice application, and my basic need is autoscaling. We have decided to deploy it on Google Cloud Compute Engine Managed Instance Groups, which have health checks, autoscaling and autohealing, and thus suit our current needs well. I want to deploy each service on its own instance group and use a load balancer to transfer traffic to the services.

At this stage I do not want to spend time on Docker and Kubernetes for the MVP roll-out, as that would be new learning for me and the team and would delay the launch. In short, DevOps will be stage 2 for us, as each service WAR takes just a few seconds to deploy on the server, and that is good for now.

My question is about how to deploy new versions automatically to all the instances. If I have 10 instances in each instance group, will I need to deploy the WAR one by one to each instance, or is there auto-deployment of app code?

I did web research but could not find any connected answers in this context. There is auto-deployment only for configuration changes to the underlying Instance Template. No document says that if I update the instance template with a WAR containing the latest version of the application code, it will auto-update all connected instances (like the configuration changes do).
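
A hedged sketch of the usual pattern (all names are placeholders): bake each new WAR into a new instance template, then ask the MIG to roll it out across the running instances:

gcloud compute instance-groups managed rolling-action start-update my-service-mig \
    --version=template=my-service-template-v2 \
    --zone=us-central1-a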

Thanks for your help.

Can we make tar multi-threaded without any sort of compression?

Posted: 07 Nov 2021 04:19 PM PST

I have to back up 500GB using the traditional tar -cvf command (without any compression). However, tar adds each file one by one, and this way it's taking a very long time.

Is there any way to make tar multi-threaded (so it can add multiple files to the tar archive at once)?
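
tar itself writes a single archive serially, but the work can be parallelised across several archives; a sketch (source and destination paths are assumptions, and the result is several archives rather than one):

# one tar per top-level directory, four running at a time
find /data -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -P 4 -I{} sh -c 'tar -cf "/backup/$(basename "{}").tar" "{}"'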

Iptables: how to allow forwarding from wireguard NIC only to some IP

Posted: 07 Nov 2021 04:44 PM PST

Context

I successfully integrated Wireguard in my LAN so I could access my NAS (192.168.1.45) from the outside.

|Router|      ===:5182=> |VPN server|          ====> |NAS|
192.168.1.254            192.168.1.21 (wlan0)        192.168.1.45
                         10.10.10.1 (wg0)

Packets forwarding through my VPN server relies on:

  1. IP forwarding in /etc/sysctl.conf (cf. my script)
  2. the following rules, added (-A) when the WireGuard interface (wg0) is up:
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o $main_nic -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o $main_nic -j MASQUERADE  

(this is the command WireGuard executes when I stop wg0)

PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o $main_nic -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o $main_nic -j MASQUERADE  

Need

This works like a charm, but how could I restrict things so that a client entering my LAN through this VPN entry point could only access 192.168.1.45 and no other IP? Is it compatible with IP forwarding?

Ideally, if this could be entirely managed in WireGuard's PostUp/PostDown directives (independently of the previous rules on the system), that would be amazing. I tried some rules but, let's face it, I am more of a developer than a network administrator.
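
A hedged, untested sketch of the shape this could take: replace the blanket ACCEPT with an accept-only-the-NAS rule followed by a DROP. This stays compatible with IP forwarding, since filtering happens in the FORWARD chain after routing:

PostUp = iptables -A FORWARD -i wg0 -d 192.168.1.45 -j ACCEPT; iptables -A FORWARD -i wg0 -j DROP; iptables -t nat -A POSTROUTING -o $main_nic -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -d 192.168.1.45 -j ACCEPT; iptables -D FORWARD -i wg0 -j DROP; iptables -t nat -D POSTROUTING -o $main_nic -j MASQUERADE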

winhttp proxy settings not being picked up by system account

Posted: 07 Nov 2021 06:15 PM PST

We are in the middle of setting up Exchange Hybrid with new Exchange 2016 servers in their own network segment, on which the HCW was run; they have access to the Microsoft O365 endpoints only through our firewall. Then there are our existing Exchange 2010 mailbox servers, which are on a different network segment and have never been internet facing.

Mail flow, legacy public folders and free/busy lookups are working correctly from EXO to these entities hosted on the Exchange 2010 servers.

To get free/busy lookups working from EXO to users on Exchange 2010, we implemented the following using an elevated command prompt on our Exchange 2010 servers, to point to a proxy that has been set up to route to the Microsoft O365 endpoints:

netsh winhttp set proxy proxy-server="172.22.90.102:80" bypass-list="localhost;127.0.0.1;*.dom.com;exe10sever01;exe10server02;ex16server01;exc16server02"

The issue we are facing is that Exchange 2010 users cannot successfully look up free/busy information for EXO users, despite the proxy being in place.

When testing using PsExec -s -i to launch Internet Explorer on the 2010 servers, with just Automatically Detect Settings selected in Internet Options/Connections/LAN settings:

I do not see any traffic to our proxy being recorded in Wireshark.

I'm unable to connect to specific Microsoft URLs such as https://nexus.microsoftonline-p.com/federationmetadata/2006-12/federationmetadata.xml, which just times out.

However, if I launch IE again with PsExec and set the proxy details directly in IE, I see the traffic being directed to the proxy server and the URLs open.

Disabling antivirus, firewalls etc. on the Exchange servers makes no difference to the outcome. Is there some registry setting or something that I'm missing that is stopping the system account from using the proxy settings for Exchange?
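
One quick check worth recording (sketch): confirm what proxy the SYSTEM account actually sees, since the netsh winhttp settings are per-machine and easy to mistake for the IE/WinINET ones:

:: run from an elevated prompt on the Exchange 2010 server
PsExec -s cmd /c "netsh winhttp show proxy"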

Windows Server 2019 accessing shares over VPN for some users

Posted: 07 Nov 2021 09:27 PM PST

I'm using Windows Server 2019 and have set up an OpenVPN 2.5.4 client as a service to start a VPN link back to my pfSense box at another location. The problem is that only some Windows Server users can access the Samba shares on a Debian machine on the other side of the VPN.

If I log in as user A, they can see the mapped drives at \\192.168.0.4, but user B cannot. User B can, however, ping the 192.168.0.4 machine.

The VPN is starting up on boot of the Windows Server and connecting.

It works for user A and the Administrator. The error message I get for user B when trying to access \\192.168.0.4 in Explorer is:

Windows cannot access \\192.168.0.4\data

Check the spelling of the name. Otherwise, there might be a problem with your network. To
try to identify and resolve network problems, click Diagnose.

I've tried setting user B as an Administrator, rebooting and logging in again, but that didn't work. It feels like a permissions problem. The first time I tried accessing a Samba share as user A it prompted me for credentials, but it doesn't do this for user B.
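
For diagnosis, a sketch of what could be compared between the two accounts (username and password are placeholders):

:: run as user B
net use
cmdkey /list
:: force a fresh session with explicit credentials
net use X: \\192.168.0.4\data /user:username password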

Sonicwall Global VPN user either can't reach internet, or LAN depending on Access List

Posted: 07 Nov 2021 10:01 PM PST

I have a Sonicwall running firmware 6.5.4.4-44n and a standard VPN (not SSL-VPN) setup which I'm connecting to via the Global VPN Client for Windows. The WAN Group VPN is set up to be a "Split Tunnel", and I have both "Set Default Gateway as this Gateway" and "Apply VPN Control List" NOT checked (checking either doesn't seem to make a difference in the behavior).

What I would like to accomplish is for users connected to the VPN to access the "X0 Subnet" (an object defined as 10.0.0.0/255.255.255.0) through the VPN and the rest of the internet via their own external connection (i.e. NOT route internet traffic through the VPN).

What I've found is that my users can either:

  1. Access the internet, but not the LAN if I set the user "VPN Access" to be "X0 Subnet" and nothing else
  2. Access the LAN, but not the internet if I set the user "VPN Access" to "WAN RemoteAccess Networks" (which is defined as 0.0.0.0/0.0.0.0)

Perhaps I'm missing what "VPN Access" means, but this seems like the opposite of the behavior I would expect (giving "X0 Subnet" access results in the user not being able to access the "X0 Subnet"). I've been trying different configurations and following various internet posts for the past 2 days without making any progress. Does anyone have an idea of what is going on here?

With "LAN Networks" in the access list, here is my client route map. My (non VPN client network is 10.0.2.0/24. The remote network I'm trying to access is 10.0.0.0/24, which is in the "LAN Subnets" list)

route print
===========================================================================
Interface List
  7...00 60 73 0e 22 ad ......SonicWALL Virtual NIC
  5...08 00 27 be f3 85 ......Intel(R) PRO/1000 MT Desktop Adapter
  1...........................Software Loopback Interface 1
===========================================================================

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0         10.0.2.2        10.0.2.15     25
         10.0.0.0    255.255.255.0         On-link        10.0.0.213    257
       10.0.0.213  255.255.255.255         On-link        10.0.0.213    257
       10.0.0.255  255.255.255.255         On-link        10.0.0.213    257
         10.0.2.0    255.255.255.0         On-link         10.0.2.15    281
        10.0.2.15  255.255.255.255         On-link         10.0.2.15    281
       10.0.2.255  255.255.255.255         On-link         10.0.2.15    281
     33.33.171.50  255.255.255.255         10.0.2.2        10.0.2.15     25
     33.33.171.50  255.255.255.255         On-link        10.0.0.213      2
        127.0.0.0        255.0.0.0         On-link         127.0.0.1    331
        127.0.0.1  255.255.255.255         On-link         127.0.0.1    331
  127.255.255.255  255.255.255.255         On-link         127.0.0.1    331
        224.0.0.0        240.0.0.0         On-link         127.0.0.1    331
        224.0.0.0        240.0.0.0         On-link         10.0.2.15    281
        224.0.0.0        240.0.0.0         On-link        10.0.0.213    257
  255.255.255.255  255.255.255.255         On-link         127.0.0.1    331
  255.255.255.255  255.255.255.255         On-link         10.0.2.15    281
  255.255.255.255  255.255.255.255         On-link        10.0.0.213    257
===========================================================================
Persistent Routes:
  None

IPv6 Route Table
===========================================================================
Active Routes:
 If Metric Network Destination      Gateway
  1    331 ::1/128                  On-link
  5    281 fe80::/64                On-link
  7    281 fe80::/64                On-link
  7    281 fe80::6520:9f25:dd7:33ee/128
                                    On-link
  5    281 fe80::bd8b:6045:f79a:1ff9/128
                                    On-link
  1    331 ff00::/8                 On-link
  5    281 ff00::/8                 On-link
  7    281 ff00::/8                 On-link
===========================================================================
Persistent Routes:
  None

Thanks in advance

Elasticsearch connection refused

Posted: 07 Nov 2021 05:02 PM PST

I have just installed Elasticsearch 7.1.1 on Debian 9 through the apt-get repository, on a VPS with 4GB RAM and 1 vCPU.

service elasticsearch status

elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-06-04 16:53:25 CEST; 4min 53s ago
     Docs: http://www.elastic.co
  Process: 3161 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=78)
 Main PID: 3161 (code=exited, status=78)

Jun 04 16:53:11 MONITOR-BACKUP systemd[1]: Started Elasticsearch.
Jun 04 16:53:11 MONITOR-BACKUP elasticsearch[3161]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Jun 04 16:53:25 MONITOR-BACKUP systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Jun 04 16:53:25 MONITOR-BACKUP systemd[1]: elasticsearch.service: Unit entered failed state.
Jun 04 16:53:25 MONITOR-BACKUP systemd[1]: elasticsearch.service: Failed with result 'exit-code'.

test curl

curl -X GET http://159.69.195.123:9200/
curl: (7) Failed to connect to 159.69.195.123 port 9200: Connection refused

environment vars

$PATH
-bash: /usr/share/elasticsearch/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin: No such file or directory

$JAVA_HOME
-bash: /usr/share/elasticsearch/jdk: Is a directory
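
The connection is refused because the service never started; exit status 78 generally points at a configuration problem rather than a network one. A hedged sketch of where to look, plus the two elasticsearch.yml settings a single public-facing node typically needs:

journalctl -u elasticsearch --no-pager -n 50
tail -n 50 /var/log/elasticsearch/elasticsearch.log

# in /etc/elasticsearch/elasticsearch.yml (single-node setup assumed):
#   network.host: 0.0.0.0
#   discovery.type: single-node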

yellow exclamation mark on the network connection

Posted: 07 Nov 2021 07:02 PM PST

OK, here is the scenario. A couple of days ago our firewall server (a domain-joined TMG, also configured as NAT and VPN gateway) got compromised. As a result, it was taken down immediately and the NAT gateway was replaced with a small router (temporarily) until a suitable device is arranged.

The DHCP service is running on a DC and is leasing addresses OK. However, the servers on the network now have a yellow exclamation mark on the network connection, indicating the network connection as unauthenticated, and the network profile on the servers is now set to public. When changing the network profile to domain, it goes back to public automatically. This is causing multiple issues on the network.

The servers are able to contact DNS, the DHCP server, and the internet.

The servers are also able to contact the domain controllers. Symantec SEP is used as a firewall on the servers.

Any ideas what could be causing this problem?
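
One low-risk thing to try, hedged: the public profile is assigned by the Network Location Awareness service, which may simply have probed the network before a domain controller was reachable; re-triggering detection on an affected server:

# PowerShell, elevated
Restart-Service NlaSvc -Force
Get-NetConnectionProfile    # should now report DomainAuthenticated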

What is the difference between aclinherit and aclmode?

Posted: 07 Nov 2021 06:05 PM PST

ZFS filesystems can have the aclinherit and aclmode properties set on them to control how inheritable ACL entries interact with object creation and Unix-style permissions operations.

Unfortunately, the official documentation is a bit cryptic/ambiguous as to exactly what the difference is between these two properties in terms of their role in computing ACLs. To illustrate, take these excerpts from Securing Files and Verifying File Integrity in Oracle® Solaris 11.3, emphasis mine:

aclinherit – Determine the behavior of ACL inheritance...

and:

aclmode – Modifies ACL behavior when a file is initially created or controls how an ACL is modified during a chmod operation...

This is really confusing, because ACL inheritance is going to occur or not occur when a file is initially created!

As for chmod, the above language and some of the examples suggest that its behaviour is governed by aclmode, but there is also an example on p.45 that shows it being governed by aclinherit.

I have a feeling this is also complicated by variables in the APIs used to create files. (I am familiar with the Windows APIs but not *nix ones.)

I feel like even after reading through the documentation I have a rather incomplete picture of how these properties work.

What exactly is the difference between the two? They seem to have some overlap, so what governs which is applied? What if they contradict?
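
Experimenting on a scratch dataset makes the split easier to see than the docs do; a sketch (tank/test is a placeholder):

zfs get aclinherit,aclmode tank/test
zfs set aclinherit=passthrough tank/test   # governs inheritance when objects are created
zfs set aclmode=passthrough tank/test      # governs what chmod does to an existing ACL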

Exim not Listening on port 465 or 587 for TLS connection

Posted: 07 Nov 2021 08:09 PM PST

I am configuring Exim on an Ubuntu server to send and receive mail via TLS.

I followed many guides that show how to configure Exim with TLS, but my Exim still doesn't listen on 465 or 587.

Exim only listens on port 25, and I am able to send and receive mail.

This is the official guide that I followed: https://help.ubuntu.com/community/Exim4

But still no luck; also, I cannot find any reference in the config files indicating which ports Exim listens on.

I have also allowed the ports 465 and 587 via ufw using the command:

ufw allow 465
ufw allow 587

but still Exim won't listen on 465 or 587. Can anybody tell me why this is happening, or are there steps that I am missing?
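
For reference, the listener ports live in Exim's main configuration section rather than in the firewall; a hedged, untested sketch (on Debian/Ubuntu split config this would go in a file under /etc/exim4/conf.d/main/):

daemon_smtp_ports = 25 : 465 : 587
tls_on_connect_ports = 465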

Getting error 1219 while there are no other sessions

Posted: 07 Nov 2021 07:02 PM PST

PCs in our organisation run Windows 10 Pro and are sometimes shared between users (local accounts, no domain or AD).

I have written a batch script that users execute when mounting our network shares to a drive letter. Most of the time it runs fine, but seemingly randomly it returns error 1219.

The first part of the script clears the network shares before mounting them again (so another user can logon).

NET USE * /delete /y >NUL: 2>&1  

This works fine and afterwards the net use command tells me there are no more connections.

I ran into the problem of cached user credentials a while ago so I decided to add the following lines to remove the stored credentials as well.

CMDKEY /delete:Domain:target=%ipaddr% >NUL 2>&1
CMDKEY /delete:LegacyGeneric:target=%ipaddr% >NUL 2>&1

This also works fine and removes the credentials that windows stores for our fileserver.

The last part of the script mounts the network shares using the credentials the user provided.

NET USE H: \\%ipaddr%\home /user:srv002\%username% %password% /P:Yes
NET USE P: \\%ipaddr%\Privacy /user:srv002\%username% %password% /P:Yes >NUL 2>&1
NET USE M: \\%ipaddr%\Marketing /user:srv002\%username% %password% /P:Yes >NUL 2>&1

These last lines return error code 1219 from time to time, telling me that there must not be multiple sessions using different credentials to the same server. A reboot, or manually adding the shares, usually works in this case.

I think I must be missing something, but after some research the only solution given is to execute NET USE * /delete /y, which I already do.
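
A hedged workaround worth noting: error 1219 is tracked per server name, so addressing the server by hostname instead of IP gives Windows a second identity to hold a session against (srv002 here is a guess based on the credential domain in the script):

NET USE H: \\srv002\home /user:srv002\%username% %password% /P:Yes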

Let's Encrypt certbot validation over HTTPS

Posted: 07 Nov 2021 07:54 PM PST

Update: The original SNI challenge type has been disabled. There is a new more secure SNI challenge type with limited server support. SNI is not likely a suitable option for small sites.

I have configured HTTP to allow /.well-known/ over HTTP and refuse or redirect all other requests. All domains are configured to use the same file system directory. The server adds a 'Strict-Transport-Security' header to get supporting browsers to cache the upgrade requirement. DNS CAA records limit the CAs that can provide certificates.

Original response: From the docs of the Certbot webroot plugin

The webroot plugin works by creating a temporary file for each of your requested domains in ${webroot-path}/.well-known/acme-challenge. Then the Let's Encrypt validation server makes HTTP requests to validate that the DNS for each requested domain resolves to the server running certbot.

On a privately used home server, I have port 80 disabled, that is, no port-forwarding is enabled in the router. I have no intention of opening up that port.

How can I tell certbot that the validation server should not make an HTTP request, but an HTTPS (port 443) request, to validate the ownership of the domain?

The validation server shouldn't even have the need to validate the certificate of the home server, as it already uses HTTP by default. I might have a self-signed certificate, or the certificate that is up for renewal, but that should not matter.

Currently I am in the situation where I need to enable port 80 forwarding as well as a server on it in order to create / renew certificates. This doesn't let me use a cronjob to renew the certificate. Well, with enough work it would, but I already have a server listening on 443, which could do the job as well.
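
For context: the ACME HTTP-01 challenge is pinned to port 80 by the specification, so the validator cannot be pointed at 443. The challenge type that needs no inbound connection at all is DNS-01; a sketch (manual form shown; a certbot DNS plugin for the DNS provider would make it cron-friendly):

certbot certonly --manual --preferred-challenges dns -d example.com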

OpenVPN tls-verify with batch script

Posted: 07 Nov 2021 09:35 PM PST

I want to execute a batch script to verify whether the common name of the user is present in a TXT file; if yes, authorize the connection, otherwise deny it. My server.ovpn is:

local IPADDRESS
mode server
port 1194
dev tun
dev-node VPNInterface
server 10.1.1.0 255.255.255.0
push "redirect-gateway def1"
push "dhcp-option DNS 8.8.8.8"
comp-lzo
tls-server
tls-auth keys/shared.key 0
ca keys/ca.crt
cert keys/server.crt
key keys/server.key
dh keys/dh1024.pem
keepalive 10 60
tls-verify "C:\verify.bat"

And my verify.bat script is:

@echo off
echo "%1 %2" > C:\log.txt
setlocal enableextensions enabledelayedexpansion
for /f "tokens=*" %%a in (C:\CN_List.txt) do (
  set tst=%%a
  set tst=!tst:%2=!
  if not !tst!==%%a (
    exit /b 0
  ) else (
    exit /b 1
  )
)

I did that echo into log.txt to see if I get the certificate depth and the X.509 common name inside the file, but nothing appears. And in OpenVPN I get the following error:

Thu Sep 25 11:26:15 2014 191.177.89.124:54063 WARNING: Failed running command (--tls-verify script): returned error code 1
Thu Sep 25 11:26:15 2014 191.177.89.124:54063 TLS_ERROR: BIO read tls_read_plaintext error: error:140890B2:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:no certificate returned
Thu Sep 25 11:26:15 2014 191.177.89.124:54063 TLS Error: TLS object -> incoming plaintext read error
Thu Sep 25 11:26:15 2014 191.177.89.124:54063 TLS Error: TLS handshake failed

As you can see from the first line, it looks like there's some error in the tls-verify script. I'm using exit /b 0 when the second parameter (user CN) is found. Does anyone have a clue how to get this script to execute properly?
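
For comparison, a hedged rewrite of verify.bat using findstr, which avoids the delayed-expansion substring trick entirely (note the log also says "no certificate returned", so the client may not be presenting a certificate in the first place):

@echo off
rem tls-verify passes the depth as %1 and the X.509 subject as %2
rem only the leaf certificate (depth 0) is checked against the list
if not "%1"=="0" exit /b 0
findstr /i /c:"%~2" C:\CN_List.txt >nul 2>&1 && exit /b 0
exit /b 1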

Have Exchange/Outlook process html tags in SMTP mail

Posted: 07 Nov 2021 10:01 PM PST

I am working with a third-party application that sends simple SMTP email messages. The application doesn't respect line breaks or multiple spaces, so the resulting email is barely legible.

Since the message comes through our SMTP server, Outlook sees it as plain text. Can I include HTML tags in the SMTP mail message and have Outlook process the tags to make the output more user friendly?
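
For context: Outlook decides between plain text and HTML from the MIME headers, not from tags in the body, so the message (or the sending application) has to declare itself as HTML. A sketch of the raw message (subject and body are placeholders):

MIME-Version: 1.0
Content-Type: text/html; charset=utf-8
Subject: Status report

<html><body>
<p>Line one<br>Line two</p>
</body></html>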

Bind DNS for wildcard subdomains

Posted: 07 Nov 2021 06:05 PM PST

I'm on a Windows 7 VM with Apache. I'm trying to get BIND DNS set up to route subdomains to the same place as the main domain. The main domain routes correctly to 172.16.5.1 as specified in the hosts file, but the subdomains are still routing to 127.0.0.1.

I haven't added anything to httpd.conf because I don't think I need to. Here are my BIND files. Any ideas of what might be wrong? I'm also not sure what 'hostmaster' should be changed to, if anything, in the zone file.

/etc/named file:

options {
    directory "c:\named\zones";
    allow-transfer { none; };
    recursion no;
};

zone "." IN {
    type master;
    file "db.eg.com.txt";
};

#allow-transfer { none; };

# Use with the following in named.conf, adjusting the allow list as needed:
key "rndc-key" {
    algorithm hmac-md5;
    secret "Rha8Z8AKxOeg+asqZQ==";
};

controls {
    inet 127.0.0.1 port 953
        allow { 127.0.0.1; } keys { "rndc-key"; };
};

zones/db.eg.com.txt file:

$TTL 6h
@   IN SOA  eg.com. hostmaster.eg.com. (
            2011100911
            10800
            3600
            604800
            86400 )

@       NS  eg.com.
*   IN A    172.16.5.1
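
One thing that stands out, hedged: the zone above is declared as "." (the root zone) rather than the domain itself, so a declaration along these lines may be what was intended:

zone "eg.com" IN {
    type master;
    file "db.eg.com.txt";
};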

New Zimbra installation: mails end up in Spam folder of Gmail and Yahoo

Posted: 07 Nov 2021 05:02 PM PST

I have set up a new Zimbra Open Source installation. This is a new email server, and its IP is not blacklisted in the most prominent block lists. But whatever I have tried, the emails still end up in the Spam folder. The headers I received in my Gmail account are given below:

http://pastie.org/1308039
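
Deliverability for a fresh IP usually hinges on SPF, DKIM and reverse DNS rather than blocklists; a hedged sketch of an SPF record for the sending domain (203.0.113.10 stands in for the Zimbra server's IP):

example.com.    IN TXT "v=spf1 mx ip4:203.0.113.10 ~all"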

Why did my cron job run twice when the clocks went back?

Posted: 07 Nov 2021 07:55 PM PST

The clocks went back an hour last night at 2am - British Summer Time ended. My backup job is scheduled to run daily at 01:12. It ran twice. This is on a Debian Lenny server.

man cron says:

if the time has moved backwards by less than 3 hours, those jobs that fall into the repeated time will not be re-run

The crontab entry is: 12 1 * * * /home/lawnjam/bin/backup.sh

What's going on?
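
Whatever the root cause turns out to be, the defensive fix is to keep jobs out of the hour the clock change touches; a sketch:

# 03:12 is outside the 01:00-02:00 changeover window, avoiding both the
# duplicate run (clocks back) and the skipped run (clocks forward)
12 3 * * * /home/lawnjam/bin/backup.sh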
