Recent Questions - Server Fault
- "failed to setup loop device" when mounting an Ubuntu iso image on Ubuntu
- How can I search for symlinks that point to other symlinks?
- Forward DNS to specific ports to access docker container
- Exim established many connections to strange ips
- nginx Proxy Manager Custom Locations
- QEMU freezes at "Booting from Hard Disk..." with -nographic
- Bring Cluster On-Line with only One Node
- Cisco asr 1001-hx
- Apache Mod_RemoteIP Support
- Degraded RAIDZ2 but cannot find the failed disk
- Realtek data corruption issue
- How to authorize only IP from a Fargate ECS service for MongoDB Atlas Cluster
- How to sync GCP Cloud Storage Bucket metadata to a database?
- App Version upgrades to all running instances in Google Instance Groups
- Can we use make tar multi-threaded without any sort of compression
- Iptables: how to allow forwarding from wireguard NIC only to some IP
- winhttp proxy settings not being picked up by system account
- Windows Server 2019 accessing shares over VPN for some users
- Sonicwall Global VPN user either can't reach internet, or LAN depending on Access List
- Elasticsearch connection refused
- yellow exclamation mark on the network connection
- What is the difference between aclinherit and aclmode?
- Exim not Listening on port 465 or 587 for TLS connection
- Getting error 1219 while there are no other sessions
- Let's Encrypt certbot validation over HTTPS
- OpenVPN tls-verify with batch script
- Have Exchange/Outlook process html tags in SMTP mail
- Bind DNS for wildcard subdomains
- New installation Zimbra mails end up in Spam folder of Gmail and Yahoo
- Why did my cron job run twice when the clocks went back?
"failed to setup loop device" when mounting an Ubuntu iso image on Ubuntu Posted: 07 Nov 2021 10:22 PM PST I downloaded an iso image of Ubuntu server, but I received this error: The same error shows up if I add . PS: must I have super-user privileges to mount an iso file? I saw many questions about mounting an iso with sudo, but I'm experiencing the same error. How can I check whether I have permission to use
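For what it's worth, loop mounts do normally require root, since mount(8) has to allocate a loop device behind the scenes. A sketch of the usual invocation (the iso filename is a placeholder):

```shell
sudo mkdir -p /mnt/iso
sudo mount -o loop,ro ubuntu-server.iso /mnt/iso   # hypothetical filename

# If the loop device itself is the problem:
losetup -a        # list loop devices currently in use
sudo losetup -f   # print the first free loop device
```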
How can I search for symlinks that point to other symlinks? Posted: 07 Nov 2021 09:11 PM PST I have a morass of chained symlinks like this scattered around: A (symlink) -> B (symlink) -> C (file) Some may even involve longer chains; I'm not sure yet. When manually examining a single file it's easy to see what's going on, but I'm looking for an automated way to find these and (preferably) clean them up. In the example above I would want the B->C symlink to stay in place, but the A->B symlink would become A->C. Identifying them is the main goal; the quantity might end up being sufficiently low that fixing them manually wouldn't be a big deal. But finding them all manually isn't feasible. I'm aware of the "symlinks" utility, and it is useful in many scenarios such as finding dangling symlinks, but it doesn't seem to have the ability to detect "chains" like this. I'm aware of "find -type l" as well as the -L option for find, but they don't seem to work together, as "find -L -type l" always returns an empty response. I make use of "find -type l -exec ls -l {} +" (which works better than find's -ls option) and tried to use it to obtain a list of symlink destinations which I could then check to see if they're symlinks or not; however, all the symlinks are relative rather than absolute, so the output is a bit messy. For example, I get outputs like this: From that I have to think a bit to figure out that the actual symlink destination is ./a/e/f/g/h (and I can then check to see if it's a symlink or not), which is not really ideal for automation. If the symlinks were (temporarily) absolute rather than relative this would be easier, but the "symlinks" utility can only convert absolute -> relative; as far as I can tell it can't convert relative -> absolute.
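One way to automate this (a sketch, not a polished tool): resolve each symlink's immediate target yourself, and when that target is itself a symlink, re-point the link at the fully resolved destination from `os.path.realpath`. Note that this writes absolute targets, which sidesteps the relative-path ambiguity described above:

```python
#!/usr/bin/env python3
"""Find symlinks whose immediate target is itself a symlink and
re-point them at the final resolved target (A->B->C becomes A->C)."""
import os

def flatten_symlink_chains(root, dry_run=True):
    """Return a list of (link, old_target, new_target) for chained symlinks.
    With dry_run=False, rewrite each chained link to point at the final
    (absolute) target."""
    changed = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                continue
            # Resolve the immediate target relative to the link's directory
            target = os.path.join(dirpath, os.readlink(path))
            if os.path.islink(target):
                final = os.path.realpath(path)   # follows the whole chain
                changed.append((path, os.readlink(path), final))
                if not dry_run:
                    os.remove(path)
                    os.symlink(final, path)
    return changed
```

Run it once with `dry_run=True` to get the report, then again with `dry_run=False` to rewrite the chains.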
Forward DNS to specific ports to access docker container Posted: 07 Nov 2021 09:38 PM PST I have 1 domain and 1 subdomain: I have 2 docker containers running 2 different applications, a frontend website and an API. These containers are accessible over 8080 (frontend) and 3000 (backend). Both domains are on an ELB over HTTPS, and I have set up IP routing to forward traffic from http and https to port 8080, so the frontend web app loads fine. But the webapp needs to access the API through a different domain (subdomain), and I am completely lost on how to get api.example.com.au to load data from the API on port 3000. I thought perhaps an apache container accepting all traffic from example.com.au and api.example.com.au and then using proxypass to the appropriate containers over the different ports, but I'm unsure how to achieve this based on some examples I found, or even whether this is the best approach.
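The ProxyPass idea is workable. A minimal sketch of name-based virtual hosts in front of the two containers (assumes mod_proxy and mod_proxy_http are enabled; the container hostnames `frontend` and `backend` are placeholders for whatever the Docker network resolves):

```apache
<VirtualHost *:80>
    ServerName example.com.au
    ProxyPreserveHost On
    ProxyPass        "/" "http://frontend:8080/"
    ProxyPassReverse "/" "http://frontend:8080/"
</VirtualHost>

<VirtualHost *:80>
    ServerName api.example.com.au
    ProxyPreserveHost On
    ProxyPass        "/" "http://backend:3000/"
    ProxyPassReverse "/" "http://backend:3000/"
</VirtualHost>
```

Apache picks the vhost by the Host header, so both hostnames can point at the same ELB target.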
Exim established many connections to strange IPs Posted: 07 Nov 2021 08:37 PM PST I installed VestaCP and used their mail server for my domain mails. But when I run netstat on my server, it shows some strange connections. There have been no problems with my mail server so far; I just worry about these connections. Does my server have any security problems?
nginx Proxy Manager Custom Locations Posted: 07 Nov 2021 09:45 PM PST Ultimately I intend to configure nginx to proxy content from web services on different hosts. As currently set up I'm using nginx Proxy Manager with nginx in Docker containers. To exclude the complexities of web service setup from the issues of configuring the reverse proxy, I have set up web servers with static content.
As you can see from this grab, I have nginx successfully set up to proxy my workstation's IIS (not to mention a public DNS entry for my external interface). That's working fine. These grabs show that the Apache container maps 80 to 8080 on the Docker host, which is imaginatively named dockerhost, and the browser on my workstation can access both the root document and another document by name. At this point I altered the nginx proxy host definition to define a custom location. Within the Docker network Apache is on port 80; this is why I've specified 80 rather than 8080. This appears to work... until you try to load some other resource from Apache, but you get the same content. It appears that absolutely anything beginning with At this point I went back and looked for documentation but failed to find anything relevant. Swapping things around so that nginx proxies IIS and the custom location How should this configuration be expressed? A Proxy Manager based answer is preferable; I have quite a bit to learn before I can use instructions on hacking the nginx config directly. That said, for diagnostic use, here's the generated config.
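For reference, the usual cause of "everything under the prefix returns the same content" in hand-written nginx configs is a `proxy_pass` whose URI handling doesn't match the location prefix. A sketch with a hypothetical `/apache/` custom location (names and ports are illustrative):

```nginx
location /apache/ {
    # The trailing slash makes nginx strip the /apache/ prefix before
    # proxying, so /apache/foo.html is requested upstream as /foo.html.
    # Without it, the upstream sees the /apache/... path it doesn't know.
    proxy_pass http://dockerhost:8080/;
}
```

Proxy Manager generates a similar `location` block from its Custom Locations form, so comparing the generated config against this shape may show where the prefix is going astray.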
QEMU freezes at "Booting from Hard Disk..." with -nographic Posted: 07 Nov 2021 09:33 PM PST A QEMU VM, with both the host and guest OS being Ubuntu 20.04. QEMU 6.1.0 is compiled without any special parameters. The guest was installed from a downloaded iso image of Ubuntu server. If I start the VM using , QEMU starts a VNC server and I can see in VNC Viewer that the guest Ubuntu OS is running properly. But if I start the VM using , QEMU prints out the following and freezes. I can see from
Bring Cluster On-Line with only One Node Posted: 07 Nov 2021 05:54 PM PST I would like to set a policy such that my Failover Cluster will always come into service, even if only one (of the two nodes) is available. Background: I have only two nodes in the cluster, plus a witness quorum in a share on the DC. For this question, assume that the DC stays in service. (Windows Server 2019.) If I shut down node1, then node2 will be active. If I then shut down node2, the cluster will be stopped (obviously); however, if I then start only node1, the cluster will never recover. Not only will it not recover without node2, but I don't see an easy way to bring the cluster into service with the cluster manager. The only way I can recover the cluster, in this scenario, is to start node2; however, that does not seem (to me) to be real high availability. IMO I should be able to set a policy, or have a reasonably easy way, to bring the cluster back on-line (perhaps after a waiting period), even if node2 never recovers. Am I just thinking about this the wrong way or missing something obvious?
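If the goal is simply to start the cluster with a single surviving node, failover clustering does support forcing quorum manually. A sketch, run on the surviving node (the node name is hypothetical, and this is a recovery action rather than a standing policy):

```powershell
# Force the cluster service to start without quorum. Use with care:
# the forced node's copy of the cluster database becomes authoritative.
Start-ClusterNode -Name node1 -FixQuorum

# Equivalent legacy form:
# net start clussvc /forcequorum
```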
Cisco asr 1001-hx Posted: 07 Nov 2021 05:32 PM PST I can't seem to figure out what the WAN throughput of a Cisco ASR 1001-HX is.
Apache Mod_RemoteIP Support Posted: 07 Nov 2021 05:26 PM PST I currently have an old Apache 2.4.18 server on Ubuntu 16. Does this version of Apache support mod_remoteip? I looked in the mod_remoteip documentation, but I haven't found the minimum version of Apache that supports it.
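As far as I know, mod_remoteip has shipped with Apache httpd since the start of the 2.4 line, so 2.4.18 should include it. A minimal sketch of its configuration (the trusted proxy range is a placeholder):

```apache
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 10.0.0.0/8
```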
Degraded RAIDZ2 but cannot find the failed disk Posted: 07 Nov 2021 05:05 PM PST I'm not too experienced with RAID and ZFS things, and here's what I encountered. A raidz2 array was showing degraded status. One disk states So I assumed that I needed to replace Before disk replacement: After replacing the disk, zpool status: I am also unable to find any info on that This issue has lasted for weeks now; no one can figure out that mysterious
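When a pool member shows up only as a numeric GUID, one approach (a sketch; the pool name `tank` and device paths are placeholders) is to ask zpool for GUIDs explicitly and then match the disks that are physically present by serial number, so the one that is missing stands out:

```shell
zpool status -g tank                 # -g: print vdev GUIDs instead of names
lsblk -o NAME,SIZE,SERIAL,WWN        # physical disks the kernel can see
ls -l /dev/disk/by-id/               # stable ids, useful for the replacement
zpool replace tank <GUID> /dev/disk/by-id/<new-disk>
```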
Realtek data corruption issue Posted: 07 Nov 2021 04:13 PM PST I have just replaced an old PCI card in my PC with a PCI Express TP-Link TG-3468 based on a Realtek chipset ( Do you have any other ideas how I can confirm that there is indeed a data corruption issue, and how to debug/fix it?
How to authorize only IP from a Fargate ECS service for MongoDB Atlas Cluster Posted: 07 Nov 2021 05:31 PM PST I have an ECS Fargate service mapped to an Application Load Balancer on AWS. In this service, there are several tasks that are frequently killed and restarted. These tasks should be able to connect to a MongoDB Atlas cluster. Which IP should I whitelist for my Atlas cluster? Is it possible to have an elastic IP or a range of IPs for my service, to allow only the IP(s) of my service in my Mongo Atlas cluster? Sorry if this question is trivial; I'm struggling a bit with ECS, ALB, and networking on AWS.
How to sync GCP Cloud Storage Bucket metadata to a database? Posted: 07 Nov 2021 07:56 PM PST I have a large number of objects, currently around 1 million, stored in a GCP Cloud Storage bucket. Objects are added at a rate of 1-2 thousand per day. I would like to efficiently run queries to look up objects in the bucket based on the metadata for those objects, including file name infix/suffix, date created, storage class, and so forth. The Cloud Storage API allows searching by filename prefix (docs), but the callback takes several seconds to complete. I can do infix queries with . Rather than dealing with the Cloud Storage API for searching my bucket, I was thinking I could add a listing of all objects in my bucket to a database such as Bigtable or SQL. The database should stay in sync with all changes to the bucket, at least when objects are created or deleted, and ideally when modified, storage class changed, etc. What is the best way to achieve this?
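A common pattern for this is to enable Pub/Sub notifications on the bucket (`gsutil notification create -t <topic> -f json gs://<bucket>`) and have a small consumer apply each event to a database. Below is a sketch of the consumer side, using sqlite3 as a stand-in for the real database; the payload fields used (`bucket`, `name`, `timeCreated`, `storageClass`) are part of the Cloud Storage object resource, but the schema and event handling here are illustrative:

```python
import json
import sqlite3

# Minimal schema for metadata lookups (infix/suffix searches via LIKE)
DDL = """CREATE TABLE IF NOT EXISTS objects (
    bucket TEXT, name TEXT, time_created TEXT, storage_class TEXT,
    PRIMARY KEY (bucket, name))"""

def handle_notification(db, event_type, payload):
    """Apply one Cloud Storage Pub/Sub notification to the database.

    event_type is the eventType message attribute (OBJECT_FINALIZE,
    OBJECT_DELETE, OBJECT_METADATA_UPDATE, ...); payload is the JSON
    object resource carried in the message body."""
    obj = json.loads(payload)
    if event_type == "OBJECT_DELETE":
        db.execute("DELETE FROM objects WHERE bucket=? AND name=?",
                   (obj["bucket"], obj["name"]))
    else:  # create or metadata update: upsert the current state
        db.execute("INSERT OR REPLACE INTO objects VALUES (?,?,?,?)",
                   (obj["bucket"], obj["name"],
                    obj.get("timeCreated"), obj.get("storageClass")))
    db.commit()
```

For the initial backfill of the existing million objects, one full listing pass through the same upsert path would bring the table up to date before the subscription takes over.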
App Version upgrades to all running instances in Google Instance Groups Posted: 07 Nov 2021 06:33 PM PST We have a microservice-architecture-based application. This is my first microservice application, and my basic need is auto-scaling. We have decided to deploy it on Google Cloud Compute Engine Managed Instance Groups, which have health checks, auto-scaling, and auto-heal properties, and thus suit our current needs well. I want to deploy each service on its own instance group and use a load balancer to transfer traffic to the services. At this stage I do not want to spend time on Docker and Kubernetes for the MVP roll-out, as that would be new learning for me and the team and would delay the launch. In short, DevOps will be stage 2 for us, as each service WAR takes just a few seconds to deploy on a server, and that is good for now. My question is about how to deploy new versions automatically to all the instances. If I have 10 instances in each instance group, will I need to deploy the WAR one by one to each instance, or is there auto-deployment of app code? I did web research but could not find any connected answers in this context. There is auto-deployment only for configuration changes to the underlying Instance Template. However, no document says that if I update the instance template with a WAR containing the latest version of the application code, it will auto-update all connected instances (like the configuration changes do). Thanks for your help.
Can we use make tar multi-threaded without any sort of compression Posted: 07 Nov 2021 04:19 PM PST I have to back up 500GB using the traditional Is there any way to make tar multi-threaded (so that it can add multiple files to the tar archive at once)?
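tar's on-disk format is a single sequential stream, so one archive can't be written by several threads at once; the usual workaround is to split the file list into N independent shards written in parallel. A sketch using Python's tarfile module (uncompressed tar is mostly disk-bound, so whether the parallelism helps depends on the storage, not the CPU):

```python
import tarfile
from concurrent.futures import ThreadPoolExecutor

def parallel_tar(files, prefix, nshards=4):
    """Write uncompressed tar shards prefix-0.tar .. prefix-(N-1).tar in
    parallel threads. Restore by extracting every shard; together they
    contain exactly the input file list."""
    shards = [files[i::nshards] for i in range(nshards)]

    def write(i):
        path = f"{prefix}-{i}.tar"
        with tarfile.open(path, "w") as tf:   # mode "w" = no compression
            for f in shards[i]:
                tf.add(f)
        return path

    with ThreadPoolExecutor(max_workers=nshards) as ex:
        return list(ex.map(write, range(nshards)))
```

The threads mainly overlap I/O waits; pointing different shards at different output devices is where this pattern pays off most.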
Iptables: how to allow forwarding from wireguard NIC only to some IP Posted: 07 Nov 2021 04:44 PM PST ContextI successfully integrated Wireguard in my LAN so I could access my NAS (192.168.1.45) from the outside. Packets forwarding through my VPN server relies on:
(this is the command wireguard executes when I stop wg0) Need: This works like a charm, but how could I restrict things so that a client entering my LAN through this VPN entrypoint could only access 192.168.1.45 and no other IP? Is it compatible with IP forwarding? Ideally, if this could be entirely managed in the PostUp/PostDown wireguard directives (independently of the previous rules on the system), that would be amazing. I've tried some things but, let's face it, I am more of a developer than a network administrator.
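A sketch of doing this entirely in wg-quick's PostUp/PostDown hooks (the `%i` token expands to the interface name; the NAS IP comes from the question, everything else is illustrative):

```ini
[Interface]
# ...existing Address / PrivateKey / ListenPort lines...
PostUp   = iptables -A FORWARD -i %i -d 192.168.1.45 -j ACCEPT
PostUp   = iptables -A FORWARD -i %i -j DROP
PostDown = iptables -D FORWARD -i %i -d 192.168.1.45 -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j DROP
```

Only packets arriving on the WireGuard interface are matched (`-i %i`), so replies from the NAS, which leave via the interface rather than enter it, are unaffected; IP forwarding itself stays on.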
winhttp proxy settings not being picked up by system account Posted: 07 Nov 2021 06:15 PM PST We are in the middle of setting up Exchange Hybrid with new 2016 Exchange servers in their own network segment, where the HCW was run; they have access to the Microsoft O365 endpoints only through our firewall. Our existing Exchange 2010 mailbox servers are on a different network segment and have never been internet-facing. Mail flow, legacy public folders, and free/busy lookups are working correctly from EXO to these entities hosted on the Exchange 2010 servers. To get free/busy lookups working from EXO to users on Exchange 2010, we ran the following from an elevated command prompt on our Exchange 2010 servers, to point to a proxy that has been set up to route to the Microsoft O365 endpoints: netsh winhttp set proxy proxy-server="172.22.90.102:80" bypass-list="localhost;127.0.0.1;*.dom.com;exe10sever01;exe10server02;ex16server01;exc16server02" The issue we are facing is that Exchange 2010 users cannot successfully look up free/busy information for EXO users, despite the proxy being in place. When testing using PsExec -s -i to launch Internet Explorer on the 2010 servers, with just Detect Settings selected in Internet Options/Connections/LAN settings, I do not see any traffic to our proxy being recorded in Wireshark, and I'm unable to connect to specific Microsoft URLs such as https://nexus.microsoftonline-p.com/federationmetadata/2006-12/federationmetadata.xml, which just times out. However, if I launch IE again with PsExec and set the proxy details directly in IE, I see the traffic being directed to the proxy server and the URLs open. Disabling antivirus, firewalls, etc. on the Exchange servers makes no difference to the outcome. Is there some registry setting or something that I'm missing that is stopping the system account from using the proxy settings for Exchange?
Windows Server 2019 accessing shares over VPN for some users Posted: 07 Nov 2021 09:27 PM PST I'm using Windows Server 2019 and have set up an OpenVPN 2.5.4 client as a service to start a VPN link back to my pfSense box at another location. The problem is that only some Windows Server users can access the Samba shares on a Debian machine on the other side of the VPN. If I log in as user A, they can see the mapped drives at \\192.168.0.4, but user B cannot. User B can, however, ping the 192.168.0.4 machine. The VPN is starting up on boot of the Windows Server and connecting. It works for user A and the Administrator. The error message I get for user B when trying to access \\192.168.0.4 in Explorer is: I've tried setting user B as an Administrator, rebooting, and logging in again, but that didn't work. It feels like a permissions problem. The first time I tried accessing a Samba share as user A it prompted me for credentials, but it doesn't do this for user B.
Sonicwall Global VPN user either can't reach internet, or LAN depending on Access List Posted: 07 Nov 2021 10:01 PM PST I have a Sonicwall running firmware 6.5.4.4-44n and a standard VPN (not SSL-VPN) setup which I'm connecting to via the Global VPN Client for Windows. The WAN Group VPN is set up as a "Split Tunnel", and I have both "Set Default Gateway as this Gateway" and "Apply VPN Control List" NOT checked (checking either doesn't seem to make a difference in the behavior). What I would like to accomplish is that users connected to the VPN can access the "X0 Subnet" (an object defined as 10.0.0.0/255.255.255.0) through the VPN and the rest of the internet via their own external connection (NOT routing internet traffic through the VPN). What I've found is that my users can either:
Perhaps I'm missing what "VPN Access" means, but this seems like the opposite of the behavior I would expect. (Giving "X0 Subnet" access results in the user not being able to access the "X0 Subnet".) I've been trying different configurations and following various internet posts for the past 2 days without making any progress. Does anyone have an idea of what is going on here? With "LAN Networks" in the access list, here is my client route map. (My non-VPN client network is 10.0.2.0/24; the remote network I'm trying to access is 10.0.0.0/24, which is in the "LAN Subnets" list.) Thanks in advance
Elasticsearch connection refused Posted: 07 Nov 2021 05:02 PM PST I have just installed Elasticsearch 7.1.1 on Debian 9 through the apt-get repository. VPS: 4GB RAM, 1 vCPU. Test curl: Environment vars:
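Two usual suspects when a fresh Elasticsearch 7.x install refuses connections on a small VPS: it binds only to localhost by default, and the default JVM heap can be too large for the machine, so the service dies on startup. A sketch of the relevant elasticsearch.yml lines (only bind beyond localhost if the port is firewalled):

```yaml
# /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0        # default is localhost only
discovery.type: single-node  # skip multi-node bootstrap checks on one box
```

On a 4 GB VPS it would also be worth capping the heap in /etc/elasticsearch/jvm.options (for example `-Xms1g` / `-Xmx1g`) and checking `journalctl -u elasticsearch` to confirm the process is actually up before blaming the network.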
yellow exclamation mark on the network connection Posted: 07 Nov 2021 07:02 PM PST OK, here is the scenario. A couple of days ago our firewall server (a domain-joined TMG, also configured for NAT and as a VPN gateway) got compromised. As a result, it was taken down immediately and the NAT gateway was replaced with a small router (temporarily) until a suitable device is arranged. The DHCP service is running on a DC and is leasing addresses OK. However, the servers on the network now have a yellow exclamation mark on the network connection, indicating the network connection as unauthenticated, and the network profile on the servers is now set to public. When changing the network profile to domain, it goes back to public automatically. This is causing multiple issues on the network due to the . The servers are able to contact DNS, the DHCP server, and the internet. Servers are also able to contact the domain controllers. Symantec SEP is used as a firewall on the servers. Any ideas what could be causing this problem?
What is the difference between aclinherit and aclmode? Posted: 07 Nov 2021 06:05 PM PST ZFS filesystems can have the Unfortunately, the official documentation is a bit cryptic/ambiguous as to exactly what the difference is between these two properties in terms of their role in computing ACLs. To illustrate, take these excerpts from Securing Files and Verifying File Integrity in Oracle® Solaris 11.3, emphasis mine:
and:
This is really confusing, because ACL inheritance is going to occur or not occur when a file is initially created! As for I have a feeling this is also complicated by variables in the APIs used to create files. (I am familiar with the Windows APIs but not the *nix ones.) I feel like even after reading through the documentation I have a rather incomplete picture of how these properties work. What exactly is the difference between the two? They seem to have some overlap, so what governs which is applied? What if they contradict?
Exim not Listening on port 465 or 587 for TLS connection Posted: 07 Nov 2021 08:09 PM PST I am configuring Exim on an Ubuntu server to send and receive mail via TLS. I followed many guides showing how to configure Exim with TLS, but my Exim still doesn't listen on 465 or 587. Exim only listens on port 25, and I am able to send and receive mail. This is the official guide that I followed: https://help.ubuntu.com/community/Exim4 But still no luck; I also cannot find any reference in the config files indicating which ports Exim listens on. I have also allowed ports 465 and 587 via ufw using the command: but Exim still won't listen on 465 or 587. Can anybody help me with why this is happening, or are there steps that I am missing?
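For reference, the upstream Exim options that control this are `daemon_smtp_ports` and `tls_on_connect_ports`; on Debian/Ubuntu's split exim4 configuration the listener ports are usually set via SMTPLISTENEROPTIONS in /etc/default/exim4 instead of the main config. A sketch of the main-config form:

```
daemon_smtp_ports = 25 : 465 : 587
tls_on_connect_ports = 465
```

Port 465 is implicit TLS (hence `tls_on_connect_ports`), while 587 is plain SMTP upgraded via STARTTLS; ufw rules only open the firewall and never make the daemon bind to a port.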
Getting error 1219 while there are no other sessions Posted: 07 Nov 2021 07:02 PM PST PCs in our organisation run Windows 10 Pro and are sometimes shared between users (local accounts, no domain or AD). I have written a batch script that users execute when mounting our network shares to a drive letter. Most of the time it runs fine, but seemingly at random it returns error 1219. The first part of the script clears the network shares before mounting them again (so another user can log on). This works fine, and afterwards the net use command tells me there are no more connections. I ran into the problem of cached user credentials a while ago, so I decided to add the following lines to remove the stored credentials as well. This also works fine and removes the credentials that Windows stores for our fileserver. The last part of the script mounts the network shares using the credentials the user provided. These last lines return error code 1219 from time to time, telling me that there should not be multiple sessions using different credentials to the same server. A reboot or manually adding the shares usually works in this case. I think I must be missing something, but after some research the only solution given is to execute
Let's Encrypt certbot validation over HTTPS Posted: 07 Nov 2021 07:54 PM PST Update: The original SNI challenge type has been disabled. There is a new more secure SNI challenge type with limited server support. SNI is not likely a suitable option for small sites. I have configured HTTP to allow /.well-known/ over HTTP and refuse or redirect all other requests. All domains are configured to use the same file system directory. The server adds a 'Strict-Transport-Security' header to get supporting browsers to cache the upgrade requirement. DNS CAA records limit the CAs that can provide certificates. Original response: From the docs of the Certbot webroot plugin
On a privately used home server, I have port 80 disabled; that is, no port-forwarding is enabled in the router. I have no intention of opening up that port. How can I tell certbot that the validation server should not make an HTTP request, but an HTTPS (port 443) request, to validate the ownership of the domain? The validation server shouldn't even need to validate the certificate of the home server, as it already uses HTTP by default. I might have a self-signed certificate, or the certificate that is up for renewal, but that should not matter. Currently I am in the situation where I need to enable port 80 forwarding, as well as a server on it, in order to create / renew certificates. This doesn't let me use a cronjob to renew the certificate. Well, with enough work it would, but I already have a server listening on 443, which could do the job as well.
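For context: the ACME http-01 challenge is specified to start on port 80, so certbot cannot simply be pointed at 443 for it. Without opening port 80, the usual route is the dns-01 challenge, which needs no inbound connection at all; a sketch (the domain is a placeholder):

```shell
sudo certbot certonly --manual --preferred-challenges dns -d example.com
```

`--manual` requires pasting a TXT record by hand, so for unattended cron renewals a DNS plugin such as certbot-dns-cloudflare (matching your DNS provider) automates the record instead.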
OpenVPN tls-verify with batch script Posted: 07 Nov 2021 09:35 PM PST I want to execute a batch script to verify whether the common name of the user is present in some TXT file; if yes, authorize the connection, otherwise deny it. My server.ovpn is: And my verify.bat script is: I did that echo to log.txt to see if I get the certificate depth and the X509 common name inside the file, but nothing appears. And in OpenVPN I get the following error: As you can see in the first line, it looks like there's some error in the tls-verify script. I'm using exit /b 0 when the second parameter (user CN) is found. Does anyone have a clue how to get this script executed properly?
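For what it's worth, OpenVPN invokes the tls-verify command with two arguments, the certificate depth and the full X.509 subject string (not just the CN), and a zero exit status means accept. A sketch of the batch side (the file path and matching rule are illustrative, not the poster's actual script):

```bat
@echo off
rem %1 = depth, %2 = X.509 subject string; depth 0 is the client cert
if "%1"=="0" (
    findstr /c:"%~2" C:\openvpn\allowed.txt >nul || exit /b 1
)
exit /b 0
```

Since %2 is the whole subject string, allowed.txt would need to hold matching subject substrings, or the script would need to extract the CN= component before comparing.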
Have Exchange/Outlook process html tags in SMTP mail Posted: 07 Nov 2021 10:01 PM PST I am working with a third-party application that sends simple SMTP email messages. The application doesn't respect line breaks or multiple spaces, so the resulting email is barely legible. Since the message comes through our SMTP server, Outlook sees it as plain text. Can I include HTML tags in the SMTP mail message and have Outlook process the tags to make the output more user-friendly?
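Outlook decides how to render a message from the MIME Content-Type header, not from tags embedded in a text/plain body, so the sending side has to emit a text/html part. A sketch with Python's standard email library standing in for the third-party app (addresses and the SMTP host are placeholders):

```python
from email.mime.text import MIMEText

def build_html_message(body_text, sender, recipient, subject):
    """Wrap plain text in minimal HTML so a MIME-aware client renders
    the line breaks. The Content-Type: text/html header is what makes
    Outlook treat the body as HTML."""
    html = "<html><body>" + body_text.replace("\n", "<br>\n") + "</body></html>"
    msg = MIMEText(html, "html")
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    return msg

# Sending would then look like (hypothetical host):
#   import smtplib
#   with smtplib.SMTP("smtp.example.com") as s:
#       s.send_message(msg)
```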
Bind DNS for wildcard subdomains Posted: 07 Nov 2021 06:05 PM PST I'm on a Windows 7 VM with Apache. I'm trying to get BIND DNS set up to route subdomains to the same place as the main domain. The main domain routes correctly to 172.16.5.1 as specified in the hosts file, but the subdomains are still routing to 127.0.0.1. I haven't added anything to httpd.conf because I don't *think* I need to. Here are my bind files. Any ideas of what might be wrong? I'm also not sure what 'hostmaster' should be changed to, if anything, in the zone file. /etc/named file: zones/db.eg.com.txt file:
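For reference, a wildcard in a BIND zone file is just a `*` record, and `hostmaster` in the SOA is the zone contact address with the `@` written as `.` (so hostmaster.eg.com. means hostmaster@eg.com and can stay as-is). A sketch of the records involved:

```
$ORIGIN eg.com.
@   IN  A  172.16.5.1   ; the bare domain
*   IN  A  172.16.5.1   ; any otherwise-unmatched subdomain
```

Note that a hosts-file entry for the main domain will mask what BIND answers for that name, so subdomain behavior is the only real test that the resolver is actually consulting BIND.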
New installation Zimbra mails end up in Spam folder of Gmail and Yahoo Posted: 07 Nov 2021 05:02 PM PST I have set up a new Zimbra Open Source installation. This is a new email server, and the IP of this server is not blacklisted in most prominent block lists. But whatever I have tried, the emails still end up in the Spam folder. The headers I received in my Gmail account are as below
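New sending IPs usually land in spam for lack of SPF, DKIM, DMARC, and matching reverse DNS rather than blacklisting. A sketch of the DNS side (the IP and addresses are placeholders; the DKIM selector record comes from the key Zimbra generates on the mail host):

```
example.com.         IN TXT "v=spf1 mx ip4:203.0.113.25 -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
```

The Gmail headers quoted in the question should show `spf=pass`, `dkim=pass`, and `dmarc=pass` in Authentication-Results once these are in place; any `fail` or `none` there points at the record to fix first.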
Why did my cron job run twice when the clocks went back? Posted: 07 Nov 2021 07:55 PM PST The clocks went back an hour last night at 2am - British Summer Time ended. My backup job is scheduled to run daily at 01:12. It ran twice. This is on a Debian Lenny server. man cron says:
The crontab entry is: What's going on?