Recent Questions - Server Fault |
- Nginx not listening at local address
- `nohup` does not work properly with `&&`
- Nginx rewrite rule to make php see https scheme
- BIND9 how to have 2 reverse resolutions for 2 different domains
- How to increase the number of groups sent by ADFS via SAML to Jenkins?
- What software options are available for setting up a file caching server?
- RAID5 compatibility: zeroed superblock: WD Red 8TB drives: wd80efax-68LHPN0 and 2x 68KNBN0
- Unable to access Samba, Apache2 on Ubuntu Server
- Is HAProxy in front of Stunnel with SNIs possible?
- Only federate some users in AzureAD and not a whole domain
- ZFS and SAN: issue with data scrubbing
- Errors on a zpool filesystem
- How to enter "special" characters in the password file?
- snort3 Undefined variable in the string: HOME_NET
- Dual Gateway Setup in Mikrotik
- Connecting Google Cloud Functions across Projects
- Ubuntu 20.04 time sync problems and possibly incorrect status information
- forwarding proxmox vnc websocket with nginx
- failed to get D-Bus connection: Operation not permitted
- SCCM Device Collection Membership based on Machine Variable
- ERR_CONNECTION_TIMED_OUT (unless I'm using a proxy)
- if secondary dns server is down ubuntu can not resolve
- Debian: How to create SOCKS proxy server to exit on specific network interface?
- IIS: access denied to Web.Config file
- RDCMAN Remote Desktop Connection Manager doesn't allow all clicks or clicking
- iotop fields - What does 'TID' mean in iotop?
- Where in the US is the best geographic location to host servers for the UK/Europe market?
Nginx not listening at local address Posted: 07 May 2022 01:58 AM PDT When I go to localhost on my PC, I can connect, but when I go to my router's public IP on the host PC, the page times out. It works on my phone, and I am able to see the website. Here is my nginx configuration (I've replaced the listen address with ***): |
`nohup` does not work properly with `&&` Posted: 07 May 2022 02:33 AM PDT I want to make a delayed background execution; for the delay I use `sleep`. A simple example of `nohup` on its own works, and of course you can find the printout of the execution afterwards. But I want to delay the execution by, say, 15 seconds, so I tried a little combination. This time it does not work properly. What is wrong with it? |
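A likely explanation, sketched below: in `nohup cmd1 && cmd2 &`, `nohup` applies only to `cmd1`, and `&` backgrounds only `cmd2`, so the delay and the command never run together under one detached process. Wrapping the whole list in `sh -c` fixes that (the 1-second delay and log path here are stand-ins for the question's 15 seconds and real command):

```shell
# `nohup sleep 15 && mycmd &` nohups only `sleep`; mycmd is neither
# delayed under nohup nor protected from SIGHUP. Run the whole list
# in one child shell instead:
nohup sh -c 'sleep 1 && echo "delayed job ran" >> /tmp/delayed-job.log' \
    >/dev/null 2>&1 &
```

The subshell is what `nohup` detaches, so the `sleep`, the `&&`, and the final command all survive the terminal closing.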
Nginx rewrite rule to make php see https scheme Posted: 07 May 2022 12:36 AM PDT I want my PHP application to only see the https scheme, even though the secure connection is already terminated. I have the following setup: Browser --https--> nginx --http--> nginx --> php-fpm socket Now I need the PHP application to only know about the original https scheme of the request. Is that even possible? The only alternative I see is to make the nginx-to-nginx traffic also go over https, but I want to avoid the overhead for local traffic. |
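One way this is commonly handled, assuming the back-end nginx talks to php-fpm via fastcgi (the addresses below are placeholders, not from the question): the front nginx forwards the original scheme in a header, and the back nginx overrides the fastcgi parameters PHP uses to detect HTTPS:

```nginx
# front (TLS-terminating) nginx
location / {
    proxy_pass http://127.0.0.1:8080;           # back-end nginx, placeholder
    proxy_set_header X-Forwarded-Proto $scheme;
}

# back-end nginx, inside the php-fpm location block
fastcgi_param HTTPS on;                          # makes $_SERVER['HTTPS'] = 'on'
fastcgi_param REQUEST_SCHEME https;
```

Most PHP frameworks check `$_SERVER['HTTPS']` (or the forwarded-proto header) to decide the scheme, so no re-encryption of the local hop is needed.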
BIND9 how to have 2 reverse resolutions for 2 different domains Posted: 06 May 2022 11:47 PM PDT I have one Bind9 server and 2 different domains. I'd like to have reverse resolution for each domain. I've tried the configuration below, but I get an error. My configuration: What should I do? Should I have only one file for the 2 reverse addresses? |
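Worth noting: reverse zones in BIND are keyed by network, not by forward domain, so the usual layout is one `in-addr.arpa` zone per subnet; if both domains' hosts live in the same subnet, their PTR records share one zone file. A minimal sketch with two hypothetical /24 networks:

```
// named.conf.local (hypothetical networks)
zone "1.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/db.192.168.1";   // PTR records for 192.168.1.0/24
};

zone "2.168.192.in-addr.arpa" {
    type master;
    file "/etc/bind/db.192.168.2";   // PTR records for 192.168.2.0/24
};
```

Each PTR record then points at whichever domain's hostname owns that IP; BIND does not mind the two domains being mixed inside one reverse zone.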
How to increase the number of groups sent by ADFS via SAML to Jenkins? Posted: 06 May 2022 11:21 PM PDT Yesterday we managed to integrate the CI server Jenkins with Microsoft ADFS via SAML 2.0. When mapping the roles in Jenkins to the received groups of the user, we noticed that only 80 groups are shown in the user profile in Jenkins. Looking in the logs, it seems that only 80 groups were sent in the SAML Response. Unfortunately, the groups which we use to manage access control were not among them. I assume some group limit was reached and the remaining groups were left out. Is there any way to increase the number of groups sent by ADFS? Or is Jenkins limiting the number of groups somehow? I read somewhere that ADFS tends to flatten nested LDAP groups, which is why this limit is reached. |
What software options are available for setting up a file caching server? Posted: 06 May 2022 10:36 PM PDT To lower costs, we were planning on setting up a local caching layer for serving our game's files. We would have a 5 Gb/s up/down fiber link. Ideally, if a file is missing, the software would download it, cache it, and then forward it to the requesting user. I am new to this type of server and wasn't sure if there is already an existing software solution for this task. Ideally, I would like to host it on a Mac or Linux box, but if there is a good Windows-only solution, I'd be open to that as well. Total throughput tops out at around 4000 requests per minute and 200 MB per minute. |
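If the files are served over HTTP, nginx's built-in proxy cache is a common fit for exactly this pull-through pattern: on a miss it fetches from the origin, stores the file on local disk, and serves it; later requests never leave the LAN. A sketch, with the origin URL, cache path, and sizes as placeholders to tune:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=gamefiles:100m
                 max_size=500g inactive=30d use_temp_path=off;

server {
    listen 80;
    location / {
        proxy_cache gamefiles;
        proxy_pass https://origin.example.com;   # placeholder origin
        proxy_cache_valid 200 30d;
        proxy_cache_use_stale error timeout updating;
        proxy_cache_lock on;   # collapse concurrent misses into one origin fetch
    }
}
```

`proxy_cache_lock` matters at 4000 req/min: when many clients request the same missing file at once, only one fetch goes upstream. Squid and Apache Traffic Server fill the same role if more cache-control knobs are needed.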
RAID5 compatibility: zeroed superblock: WD Red 8TB drives: wd80efax-68LHPN0 and 2x 68KNBN0 Posted: 06 May 2022 10:25 PM PDT Summarizing the problem: I have been scraping by with JBOD for years, but finally need a real 'micro data center'. I bought 3 drives for my CentOS 8 Stream box over a couple of months, though I have heard it can be both good and bad to get drives from the same lot number. They were all WD Red 8TB drives: WD80EFAX*. But the devil is in the details: the first one was the helium-filled WD80EFAX-68LHPN0, manufactured in July 2019; the later two, bought on sale, were air-filled WD80EFAX-68KNBN0 drives from 2020. The cases looked different, but I proceeded anyway, as they were the same major version and most retailers don't even list or differentiate the rest of the model number. Unfortunately my first attempts are not going well, and sure enough it is the lone helium-filled one that is not re-joining the mdadm RAID array. Details and research: I am using this as a storage/NAS box, not a webserver, for now. I don't need it to be available at boot; in fact, I might not want that possibility, depending on how the computer is being used that day. Maybe I'm having the evil maids over and have to run out for a sec, which might turn into a day or two, and then need to re-join the Ukraine IT Army with no trusted partner physically close that day, so I need to be able to understand and adjust the configuration at any time. Not that I am that cool, but it would suck to have this unreliable array write at 1/10 the speed at the worst possible time. Hello, world! Anyway, I created my array as such: I skipped setting up a partition, as I heard this is not needed if you just want one contiguous volume that you'll never change, and it can even make things more difficult. It seems to work fine except for the one drive that won't associate with the array on its own.
I opened it and set up the filesystem. For future flexibility, I chose LVM: I verified successful creation, created the volume group (I actually had to use the full device path), and then it was time to make the FS:
The layout looked like:

    sde                 // disk
    └─md0               // our raid
      └─crypt-storage   // enc container

with an identical printout for /dev/sdd and /dev/sdc. It is opened and mounted now. I opened /etc/fstab and can see `/dev/mapper/devicename` is mounted at /run/media/username/mountpoint. If I wanted it to mount at boot, at this point I would need to do some extra things. But I don't; I want to mount it myself as needed. For good measure, I created mdadm.conf. If I wanted to mount at boot, I would need to update /etc/crypttab with the UUID of my RAID device. I rebooted, but lsblk shows the problem (trimmed output for `sdc`): it seems that the helium-filled wd80efax-68LHPN0 (/dev/sdc here) is not being put into the md0 array (even before the LVM on LUKS). The kernel thinks there is a partition 'sdc1'. Is this due to a problem with how I configured it, different hardware, or different firmware? As mentioned before, retailers often won't provide those last characters of the model number, so I would hope all WD80EFAX* drives work together. WD doesn't seem to provide firmware for plain hard drives: https://community.wd.com/t/firmware-wdc-wd80efzx-68uw8n0/218166/2 Or is duckduckgo just not finding it? (Sometimes it can beat Google, and certainly the WD search function.) I have taken apart some logic boards before and always wanted to mess with firmware, but not while all of my data has just been moved (this was the backup) to this encrypted stack, with some previous drives already wiped to be used as large thumbdrives. Shame on me for not having another backup; it is still readable, but I am resource-constrained and trying to consolidate these 5+ JBOD drives so I can organize this never-ending increase of data in my life and have time for more volunteer/business efforts. At least now I have learned the dreaded process of recovering/working on an encrypted RAID array, to some degree.
What I've tried: I can stop the array, but the drive will not re-add itself, and I'm afraid of using the destructive options. So I (re)added the same drive and let it sync. The good drives show the expected superblock, so I suspect the problem is here, if it is something I did in the configuration. Some posts elsewhere suggest something might be wiping the superblock, but I am not aware of anything in my shutdown or reboot that would do this; then again, I have not done an evil-maid audit in some time, nor do I want to be required to do one, and I would use the tools and notes on this machine to do so. It's probably not that, but it's possible. I could still update grub or my /etc/crypttab, but it really seems I shouldn't need to, as this is basically a giant RAID thumbdrive I'll sometimes need to hook up to the rest of the system. It seems something else screwy is going on relating to superblocks, hardware, or firmware, especially since there are visible differences on the case and the board between the helium drive and the two others that work fine (unfortunately). Any ideas? Like:
not interested in:
There seem to be enough open tools and knowledge to figure this out; it isn't rocket science, but it is a ton of moving parts working together that I might not be completely familiar with. After a few weeks of tinkering, crawling posts, and RTFMs, I thought I should ask. I hope I have provided enough detail without dragging it out. Thanks in advance!! |
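Before suspecting firmware, a few read-only checks can separate "stale metadata on the disk" from "bad drive" (device names follow the question; none of these commands write anything):

```shell
# Compare RAID superblocks across members: the failing drive should show
# the same array UUID and metadata version (e.g. 1.2) as the others
mdadm --examine /dev/sdc /dev/sdd /dev/sde

# What does the kernel think lives on sdc? A leftover partition table
# can shadow a whole-disk md superblock at assembly time
lsblk -o NAME,FSTYPE,PARTTYPE /dev/sdc

# List every signature found on the device (read-only when run
# without -a/--all erase options)
wipefs /dev/sdc
```

If `wipefs` shows a stale partition-table signature on the helium drive, that (rather than firmware) would explain why udev presents `sdc1` instead of handing the whole `sdc` device back to md at boot.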
Unable to access Samba, Apache2 on Ubuntu Server Posted: 06 May 2022 09:07 PM PDT I've got a PC running Ubuntu 20.04 that I use as my home server with a VM, Samba, a webserver, DDNS, SSH, etc., but it randomly just stopped showing up on my local network for half the services. I can no longer connect to my Samba shares from any device (connection timed out), but locally on the machine via 127.0.0.1 they appear just fine. Apache no longer works; I can't access my webpage internally on the network or externally (HTTP port 80). The machine can access the internet just fine, can be pinged from other devices, and I can SSH in just fine. I haven't changed anything on the router, and the machine shows up just fine under 192.168.1.3. I can also perform remote iperf3 tests to it, so I don't believe it's a router/ISP issue (although the results are much slower than they should be). All the services mentioned above are running on the machine fine; I reinstalled apache2 and then tried NGINX, and can confirm they both bind to port 80. I'm totally stuck as to what is causing this; everything else on my network works fine, and the only settings I have for this PC are a couple of external port forwards for 80, 433 and 5201 set up on the router. I'm currently connected to my home network over VPN, but the issues are present both over VPN and when connected to local Wi-Fi. Any help to solve this would be greatly appreciated :) Thanks. ifconfig gives: |
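Given that the daemons run but only remote clients time out while ping and SSH still work, the usual suspects are a bind address or a host firewall. A few non-destructive checks worth running on the server:

```shell
# Confirm the listeners are bound to 0.0.0.0 or the LAN address,
# not only to 127.0.0.1
sudo ss -tlnp | grep -E ':80 |:139 |:445 '

# Look for DROP/REJECT rules that would let ICMP and 22 through
# but silently drop 80/139/445
sudo iptables -L -n -v
sudo ufw status verbose
```

A timeout (rather than "connection refused") on ports that are demonstrably listening almost always means packets are being dropped in between, which points at a firewall rule rather than the services themselves.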
Is HAProxy in front of Stunnel with SNIs possible? Posted: 06 May 2022 09:45 PM PDT I have working SSL termination with STunnel in front of HAProxy. Recently, the matter of adding support for HTTP/2 was thrown my way. That is easy with HAProxy, but, as a constraint, STunnel must stay. The reason STunnel needs to stay is about 17000 lines of SNIs and the possibility of managing those via an already-in-place API. I could very well add a cert-list for HAProxy containing the SNIs; a couple of greps and echos would do the trick. However, during my searches I haven't yet found anyone putting HAProxy in front of STunnel in front of HAProxy. Is that the wrong approach? Here's what I already started working on (no SNIs in there yet; 17000 of them would be a bit much for a post): the HAProxy frontend (where I need to add HTTP/2 support) with encryption towards STunnel, the STunnel config, and the HAProxy "backend". I assumed encryption is required from HAProxy to STunnel, and that I would need to account for any protocol mismatches between them. What would the STunnel equivalent of HAProxy's `tcp-request connection expect-proxy layer4 if ...` be? Any help in getting HTTP/2 support with STunnel is greatly appreciated, as is a "Don't do that, it's wrong". Thank you. |
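For the front hop, a sketch of a pure TCP passthrough towards STunnel (ports are placeholders; whether STunnel can consume the PROXY protocol on its accept side is an open assumption here, so treat `send-proxy` as something to verify against the STunnel docs rather than a known-good setting):

```
# haproxy.cfg (front hop): TCP passthrough, no TLS termination here
frontend fe_tls
    mode tcp
    bind :443
    default_backend be_stunnel

backend be_stunnel
    mode tcp
    # send-proxy preserves the original client address across the hop
    server stunnel1 127.0.0.1:8443 send-proxy
```

One caveat that may decide the whole design: HTTP/2 is negotiated via ALPN during the TLS handshake, so it can only be enabled where TLS is terminated. If STunnel keeps terminating TLS for all 17000 SNIs, the h2 negotiation has to happen in STunnel itself; a TCP-mode HAProxy in front of it cannot add HTTP/2 support.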
Only federate some users in AzureAD and not a whole domain Posted: 07 May 2022 01:33 AM PDT We want to test a new IDP in our organization (an in-house SAML-compatible IDP). We are using AzureAD. If we federate a new domain, we can test the authentication, and it works (xxx@NewDomain.Com). Now we want to select some real users from our main domain (User1@MainDomain.com) and federate only these users, so that they can start testing the IDP without interrupting all the other users. Is this possible? Can we federate only some users to use an IDP in AzureAD, or must it always be a whole domain? Our goal is a gradual migration of users, so that we can fix any early bugs with minimal impact. |
ZFS and SAN: issue with data scrubbing Posted: 07 May 2022 01:12 AM PDT Working as scientists in a corporate environment, we are provided with storage resources from a SAN within an Ubuntu 20.04 virtual machine (Proxmox). The SAN controller is passed directly to the VM (PCIe passthrough). The SAN itself uses hardware RAID 60 (no other option is given to us) and presents us with 380 TB that we can split into a number of LUNs. We would like to benefit from ZFS compression and snapshotting. We have opted for 30 x 11 TB LUNs that we then organized as striped RAID-Z. The setup is redundant (two servers), we have backups, and performance is good, which oriented us towards striped RAID-Z over the usual striped mirrors. Independent of the ZFS geometry, we have noticed that a high write load (> 1 GB/s) during ZFS scrubs results in disk errors, eventually leading to faulted devices. By looking at the files presenting errors, we could link this problem to the scrubbing process trying to access data still present in the cache of the SAN. With moderate loads during the scrub, the process completes without any errors. Are there configuration parameters, either for ZFS or for multipath, that can be tuned within the VM to prevent this issue with the SAN cache? Output of zpool status Output of multipath -ll |
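On the ZFS side, the knobs that throttle scrub I/O so it competes less with the foreground write load are module parameters; the values below are starting points to experiment with, not recommendations, and whether they avoid the SAN-cache interaction specifically would need testing:

```shell
# Reduce concurrent scrub I/Os per vdev and cap scan bandwidth
# (OpenZFS 0.8+ parameter names)
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
echo $((4 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_scan_vdev_limit

# Persist the settings across reboots
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_vdev_scrub_max_active=1
options zfs zfs_scan_vdev_limit=4194304
EOF
```

Slowing the scrub effectively reproduces the "moderate load" condition under which the scrub already completes cleanly, at the cost of longer scrub times.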
Errors on a zpool filesystem Posted: 07 May 2022 01:13 AM PDT I'm using ZFS on a Debian 9 machine. This machine had been working for years without any problem until today. The ZFS pool is mounted on top of a hardware-controlled RAID system (so only one drive is exposed to Linux, as sda). You can see the output of "zpool status" below. Before continuing, I'll just mention that I checked the consistency of the RAID, and everything is fine. Suddenly, all accesses to the filesystem cause the command to freeze (even an ls), and eventually I need to reboot the machine manually. When running the status command, I can see a list of affected files. So, the main questions are: What is the meaning of those files? How do I fix this problem? Thank you in advance! |
How to enter "special" characters in the password file? Posted: 07 May 2022 12:17 AM PDT What is the range of characters allowed in the password field of the password file? My password contains special characters. PS: The credentials are for Exim as a client to a "smarthost". |
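For Exim's client password file the format is colon-delimited (`host:login:password`), so a literal `:` in the password is the character to be wary of; most shell-special characters are fine as long as whatever writes the line doesn't interpret them. A quick way to verify a password survives byte-for-byte (the path and credentials below are made up, and whether Exim treats a mid-line `#` specially is worth testing against your own config):

```shell
# Write the entry with single quotes so the shell leaves $, ", # alone,
# then confirm the exact bytes landed in the file
printf '%s\n' 'smarthost.example.net:myuser:p@$$w"o#rd' >> /tmp/passwd.client
grep -F 'p@$$w"o#rd' /tmp/passwd.client   # prints the line back if intact
```

The single quotes matter: inside double quotes the shell would expand `$$` to its own PID and the password stored would not be the one you typed.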
snort3 Undefined variable in the string: HOME_NET Posted: 07 May 2022 01:08 AM PDT I have installed snort3 on my Ubuntu server using this URL from the Snort web site: I compiled it according to the instructions and edited /usr/local/etc/snort/snort.lua to add my HOME_NET and other variables as per the document. Once I enable the snort3-community.rules, I see these errors. These variables are defined in: -
But they are not seen in the rules. Can anyone suggest why? |
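A common cause worth checking: in Snort 3, rules only see the variables that are exposed through the `ips` table, so defining `HOME_NET` at the top of snort.lua is not enough on its own. A hedged sketch of the relevant snort.lua fragment (the network value is a placeholder; `default_variables` and `RULE_PATH` are assumed to come from the stock snort_defaults.lua):

```lua
-- snort.lua fragment: define the nets first...
HOME_NET = '192.168.1.0/24'   -- placeholder network
EXTERNAL_NET = 'any'

-- ...then make them visible to the rule files via the ips table
ips =
{
    variables = default_variables,
    include = RULE_PATH .. '/snort3-community.rules',
}
```

If the `variables` entry is missing (or the rules are included before the variables are set), every rule referencing `$HOME_NET` fails with exactly the "Undefined variable in the string" error.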
Dual Gateway Setup in Mikrotik Posted: 06 May 2022 11:54 PM PDT I'm new to the Mikrotik environment, and I need some help with the following scenario:
What I want to do is as follows:
Additional information: The gateway for the first ADSL is So far, I have managed to access the gateway of the second ADSL, but when I ping the actual destination address of Can anyone help with the above scenario with a complete solution? |
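The usual RouterOS pattern for this is policy routing: keep one default route, mark the traffic that should use the second ADSL, and give the marked traffic its own default route. All addresses and interface names below are placeholders, since the question's actual gateways were not shown:

```
# RouterOS sketch; all addresses/interfaces are hypothetical.
# Default route out ADSL1; a second default route only for marked traffic.
/ip route add dst-address=0.0.0.0/0 gateway=192.168.1.1
/ip route add dst-address=0.0.0.0/0 gateway=192.168.2.1 routing-mark=via-adsl2

# Mark traffic from the chosen source range so it uses the second gateway
/ip firewall mangle add chain=prerouting src-address=10.0.2.0/24 \
    action=mark-routing new-routing-mark=via-adsl2

# Masquerade out both WAN interfaces so replies come back correctly
/ip firewall nat add chain=srcnat out-interface=ether1-adsl1 action=masquerade
/ip firewall nat add chain=srcnat out-interface=ether2-adsl2 action=masquerade
```

Being able to ping the second gateway but not destinations beyond it often points at the missing masquerade rule on the second WAN interface: the packets go out, but the replies have nowhere valid to return to.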
Connecting Google Cloud Functions across Projects Posted: 06 May 2022 11:08 PM PDT I am using Google Cloud Functions and have multiple projects with cloud functions that need to communicate with each other. My problem is that the functions can only communicate with each other if they have their ingress settings set to "Allow all traffic". As soon as I change it to the desired setting, "Allow internal traffic only", projectB can't talk to projectA. The two projects are Firebase projects which have a VPC network configured, as well as Serverless VPC Access in order to communicate with a back-end database. From what I can tell, Google says I should create a VPC Service Controls perimeter which includes all the projects that need to talk to each other; this is meant to solve the problem. I have done that, but I still have access issues when set to "Allow internal traffic only". I also tried setting up a VPC network with a static private IP address. From projectB I then tried to communicate with projectA on the private IP, but I am getting timeout errors. Both projectA and projectB have VPCs set up with internal private IPs. I also tried VPC peering between the projects, but still get the timeout issue. Could anyone offer any advice? |
Ubuntu 20.04 time sync problems and possibly incorrect status information Posted: 06 May 2022 11:58 PM PDT I have been having some problems with crashes on my KVM host (Lubuntu 20.04), and when troubleshooting, I noticed some time-related errors. Upon further investigation, to my horror, I saw that time was not being synced. I am sure it was set up before; I have no clue how it became un-set-up. I found this thread and tried the top answer, but to no avail: https://askubuntu.com/questions/929805/timedatectl-ntp-sync-cannot-set-to-yes I thought maybe I needed some more up-to-date instructions, so I tried this: https://linuxconfig.org/how-to-sync-time-on-ubuntu-20-04-focal-fossa-linux Then I tried this, from a different thread: I have never touched timesyncd.conf, but it is entirely commented out anyway. I checked timedatectl again, and now it is on, but still not using NTP. I understand that NTP is more precise, and that can be important in some situations; I'm not sure whether virtualization with PCI passthrough needs extremely precise time or not. From other things I was reading, I thought maybe ntp was conflicting with timesyncd, so I removed ntp for the time being. But after purging ntp, NTP showed as active! Am I going crazy? Is NTP still here somehow? Nope. Apologies for not asking a more focused question, but what on earth is going on here? I am well and truly lost. Also, I will edit this post later and make a note as to whether removing NTP (and thus activating it?!) fixed the stability problems that led me down this rabbit hole. Edit: The next thing I did was disable ntp in timesyncd and (re)install NTP as described here: https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-18-04 That resulted in: I reversed those changes as recommended by Michael Hampton: Does this mean it's working? So I guess it is working. Since the crashes that took me down this path are still happening, I guess time wasn't the issue. |
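For reference, the likely source of the confusion: in `timedatectl` output, "NTP service: active" refers to the NTP *protocol* being handled by systemd-timesyncd, not to the `ntp` package, which is why purging `ntp` made it flip to active (timesyncd took over). The stock 20.04 round-trip, with no third-party tools assumed, is:

```shell
# Use exactly one time daemon. With ntp/chrony removed, enable timesyncd:
sudo timedatectl set-ntp true
systemctl status systemd-timesyncd --no-pager

# Healthy state is "System clock synchronized: yes" plus
# "NTP service: active" (meaning timesyncd, not the ntp package)
timedatectl
```

Running the `ntp` daemon and timesyncd at the same time is what produces the contradictory status output, since each reports its own idea of sync state.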
forwarding proxmox vnc websocket with nginx Posted: 06 May 2022 10:04 PM PDT I installed nginx in order to be lazy and just go to proxmox.domain.com instead of proxmox.domain.com:8006, but now I can't access the VNC client when connected via the first address, although I can using ip+port. A friend of mine pointed out that I have to forward websockets, so I hit the keyboard, googled it, and found this. I tried everything in there, and it isn't working. I have restarted nginx, and it said the config file was valid. This is the relevant block of my config: |
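For comparison, the fragment that usually makes the Proxmox noVNC console work behind nginx is an explicit websocket upgrade on the proxied connection (the upstream address assumes a single local node; adjust to your setup):

```nginx
location / {
    proxy_pass https://127.0.0.1:8006;
    proxy_http_version 1.1;
    # these two headers turn the proxied request into a websocket
    # upgrade, which the VNC console requires
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;   # keep long-lived console sessions open
}
```

Without `proxy_http_version 1.1` the `Upgrade` header never survives the hop, since nginx proxies as HTTP/1.0 by default, so the console fails even when the rest of the GUI works.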
failed to get D-Bus connection: Operation not permitted Posted: 06 May 2022 11:54 PM PDT I'm trying to list services on my CentOS image running in Docker, but I get this error message: Any suggestions as to what the problem might be? |
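The usual cause: in a stock container your shell is PID 1, so no systemd (and no D-Bus) is running for `systemctl` to talk to. One common workaround is to boot systemd as the container's init; the flags below are a sketch and vary with Docker version and cgroup setup on the host:

```shell
# Start systemd as PID 1 inside the container, then talk to it
docker run -d --name centos-sysd --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    centos:7 /usr/sbin/init
docker exec centos-sysd systemctl list-units --type=service
```

The cleaner alternative is to not use systemd in containers at all and run the service process directly as the container's command, which is the idiomatic Docker pattern.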
SCCM Device Collection Membership based on Machine Variable Posted: 07 May 2022 01:08 AM PDT I'm not sure if this is quite possible, but I'm struggling with writing the WQL query statement that would allow me to have SCCM device collections populate based on a machine variable. Example: a device named "TestVM-01" has a machine variable named "PatchGroup" with a value of "Hour1". I would like the device collection called "Hour1" to dynamically pick up any devices with the PatchGroup variable set to Hour1. I first struggled with just querying the device variables via PowerShell and WMI, since the SMS_MachineVariable class is a lazy property of SMS_MachineSettings, so you have to call the objects by their full path. In PowerShell/WMI I can query it with something like this. If you query SMS_MachineSettings without specifying the full path of the object, it will return the MachineVariables attribute as empty. Would anyone be able to tell me how I would write the WQL to pull those objects from the SMS_Resource class "where PatchGroup = x"? |
ERR_CONNECTION_TIMED_OUT (unless I'm using a proxy) Posted: 06 May 2022 11:07 PM PDT I run my own online business as well as managing over a dozen self-hosted sites for other people using the wordpress.org platform. They're all hosted by a small company in the UK, and if I do experience any problems, the company is usually quick to sort them out. However... right now, using Chrome or Safari (on an iMac and on a PC), I'm getting the message ERR_CONNECTION_TIMED_OUT when attempting to log in to wp-admin, or even if I just want to view the sites. It's not the first time this has happened, and I've done all the usual things: cleared the browser cache, double-checked the Wi-Fi connection, used an 'is it down or is it just me' site, etc. By the way, the sites are accessible from elsewhere (but this doesn't help me; I live and work out in the sticks). I've done pings and traceroutes and copied my hosting provider in on these (no reply yet). I can access the sites using a proxy (e.g. Anonymouse), but of course can't edit them that way. Anyway, this wouldn't be a great solution; I want to be able to use Chrome or Safari. Anyone any ideas? |
if secondary dns server is down ubuntu can not resolve Posted: 06 May 2022 10:04 PM PDT I am experimenting with two local DNS servers. When I take down the second (or the primary) DNS server, I can not resolve any domain name. Using But when I try (the primary DNS server is 10.0.3.4, and I have added an A record: testsrv.lan --> 10.0.3.4) I have used tcpdump to check what is happening under the hood. Isn't Ubuntu (specifically the resolvconf service) supposed to be fault tolerant when either of the two DNS servers is down? Is this the default behavior when resolving a domain name? Is it documented anywhere? Can we change it? N.B.: I am using Ubuntu 14.04 server, and DNS is configured using /etc/network/interfaces. Any help is appreciated. Thank you. |
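The stub resolver is fault tolerant, but serially: glibc tries the nameservers in listed order and waits `timeout` seconds (default 5) per attempt before moving on, so a dead server shows up as a long delay on every lookup rather than a clean failover. This is documented in `man 5 resolv.conf` and tunable; a sketch with the question's primary and a hypothetical secondary:

```
# /etc/resolv.conf (on 14.04, set via the dns-nameservers line in
# /etc/network/interfaces so resolvconf regenerates it on boot)
nameserver 10.0.3.4
nameserver 10.0.3.5
options timeout:1 attempts:2 rotate   # fail over after 1 s, spread queries
```

If lookups fail outright rather than just slowly, that usually means the total retry budget (timeout x attempts x servers) is being exhausted by the application before the second server is ever tried, which the shorter `timeout` above addresses.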
Debian: How to create SOCKS proxy server to exit on specific network interface? Posted: 06 May 2022 11:07 PM PDT I have a setup with two internet connections.
How can I create a SOCKS 4/5 server that will take connections coming in on eth0 and proxy the traffic out through eth1? I saw that you can use ssh to create a simple SOCKS proxy, but I was unable to route the traffic through eth1. I also tried Dante, but with no success. |
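In Dante this split is exactly what the `internal`/`external` directives express; a minimal `danted.conf` sketch with no authentication (tighten the rules before real use):

```
# /etc/danted.conf: accept on eth0, force outbound traffic via eth1
logoutput: syslog
internal: eth0 port = 1080
external: eth1

clientmethod: none
socksmethod: none

client pass { from: 0.0.0.0/0 to: 0.0.0.0/0 }
socks pass  { from: 0.0.0.0/0 to: 0.0.0.0/0 }
```

One likely reason the earlier Dante attempt failed: `external: eth1` only picks the source interface, while reply packets still follow the kernel routing table. If the default route points out eth0, the connection needs a policy-routing rule (`ip rule`/`ip route` with a separate table for eth1's source address) so replies return the way they left.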
IIS: access denied to Web.Config file Posted: 07 May 2022 12:03 AM PDT I'm trying to set up a new website on a Windows Server 2003 machine. On this server there is already a website, classic ASP, on port 80. I'm configuring the new one (ASP.NET 3.5) on port 82 with, actually, .NET Framework 4.0, as I keep getting an error when trying to install 3.5. When accessing the website, I get an error saying access to the web.config file is denied; if I access a test HTML file, it loads fine. I also tried adding an impersonate clause in web.config for the machine admin user, but no success. The folder and files have the correct permissions for IUSR_SERVERNAME; web server extensions are active and have permissions as well (the .NET Framework ones). The ASP.NET user does not exist on this machine (I read somewhere you also need to give access to this user), so I don't know what else to try. Help please. Thank you. |
RDCMAN Remote Desktop Connection Manager doesn't allow all clicks or clicking Posted: 07 May 2022 12:03 AM PDT I'm using RDCMan 2.2 from Win7 x64 to Win7 x64. I can log in fine to remote boxes, see the remote desktop, and see the mouse move, but I cannot click on everything: I see Internet Explorer highlight as I move the mouse over it, but I cannot click it. Even stranger, I can successfully click the icon next to it, Media Player. I also cannot click the Windows Start button. I do not have any of these problems when I use the 'Remote Desktop' program itself. I would guess this is a security issue. |
iotop fields - What does 'TID' mean in iotop? Posted: 06 May 2022 11:58 PM PDT What does the `TID` column mean in the iotop output? Thanks. |
Where in the US is the best geographic location to host servers for the UK/Europe market? Posted: 06 May 2022 09:57 PM PDT We would like to keep our hosting in the US. But for European traffic, where is the best location for ping/response times? (East Coast, West Coast, Central, etc.) |