Dell T420 power supplies: how to convert from backplane BP connector to PSU P4? Posted: 26 Mar 2021 09:41 PM PDT I am trying to replace a failed nonredundant 550 W power supply in a Dell T420 with dual redundant 750 W supplies. I got the parts (K501P power supply backplane, KKY3X power distribution board, two 5NF18's), bolted them inside and wired things up, and it almost works ... except for power to the disk drives. My T420 is the version with 4 cabled 3.5" drives and had a 0W44NJ power cable, which has a PSU P4 connector that looks essentially identical to the BP connector from the KKY3X power distribution board, but they are the same gender and thus cannot be connected. (Closer inspection reveals the wiring to the pins is a little different too.) Here's the end of the 0W44NJ cable: Here's the end of the BP cable (the sole remaining unconnected cable from the power supply board): I realize now that the redundant power supplies appear to be designed for the model of T420 with hot-swap backplanes, and so the BP connector is meant to plug into the backplane. However, I'm hoping against hope that there is a way to make it work, or at least to find a way to power the 4 hard drives somehow. Does anyone know of a way to convert the connectors, or supply power to the hard drives another way? |
Batch window's running process disappears when Explorer is restarted Posted: 26 Mar 2021 09:28 PM PDT I am running a program through a .bat file. Some time later, when explorer.exe hangs or is restarted, the bat window is nowhere to be found, but the process still runs in the background. I minimize the bat window to the tray using MenuTools. Q: How do I attach the program's output to a bat window again without restarting the process? |
Reaching devices on Subnets Posted: 26 Mar 2021 09:19 PM PDT The following diagram will explain my needs more easily: I have a main router 192.168.0.1 which handles 2 subnets and PCs. I need 192.168.1.1 (about 30 IPs) and 192.168.3.1 (10 IPs) to be separated (traffic, maintainability). I want my PCs on the main router, e.g. 192.168.0.101 (the IP is just an example), to reach the end devices at 192.168.3.x and 192.168.2.x. How do I start? |
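A starting point, as a minimal sketch: assuming the main router is Linux-based and each subnet's router has its WAN side on the 192.168.0.x network (both next-hop addresses below are hypothetical), static routes on the main router send traffic for each subnet to the router that owns it:

    ip route add 192.168.3.0/24 via 192.168.0.2   # 192.168.0.2 = WAN side of the 192.168.3.x router
    ip route add 192.168.2.0/24 via 192.168.0.3   # 192.168.0.3 = WAN side of the 192.168.2.x router

The subnet routers then need either NAT disabled or matching return routes, so replies can find their way back to 192.168.0.x.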
PHP: Class 'PragmaRX\Google2FA\Google2FA' not found Posted: 26 Mar 2021 08:34 PM PDT I'm trying to use the class 'PragmaRX\Google2FA\Google2FA' with a "use" statement in PHP, but I keep getting "not found", even though the file is in /usr/share/php. CentOS 7. I've uninstalled and re-installed with yum, tried changing the include path in php.ini, and tried setting it explicitly with set_include_path. No luck. Any ideas? |
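If the library is packaged for Composer rather than the legacy include path, the "use" statement alone won't find it; the class is resolved by Composer's PSR-4 autoloader. A minimal sketch, assuming the project can use Composer:

    composer require pragmarx/google2fa

    <?php
    // load Composer's autoloader so the class can be resolved
    require __DIR__ . '/vendor/autoload.php';
    use PragmaRX\Google2FA\Google2FA;

    $google2fa = new Google2FA();
    echo $google2fa->generateSecretKey();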
Can't Connect To API Resource After Login Posted: 26 Mar 2021 03:06 PM PDT I created a React front end that makes an axios GET request to my API. The code on the front end runs after a user has successfully logged in. After a successful login, an auth token is sent via a cookie, and that cookie is used to verify that the user is an admin and may access the API resource /api/v1/orders. This works when I am in a local environment, but after I uploaded this to the cloud I get this error: <MY_API>/api/v1/orders:1 Failed to load resource: the server responded with a status of 500 (Internal Server Error) Adminpage.js:31 Error: Request failed with status code 500 at e.exports (createError.js:16) at e.exports (settle.js:17) at XMLHttpRequest.p.onreadystatechange (xhr.js:62) REACT FRONT END Adminpage.js const loadOrders = async () => { try { const res = await axios.get( "<MY_API>/api/v1/orders", { withCredentials: true, } ); console.log(res.data.data.orders); setOrderData([...res.data.data.orders]); } catch (err) { console.log(err); } }; NODEJS BACK-END orderRoutes.js router .route('/') .get( // Only admins can access this route authController.requireSignin, authController.isAdmin, orderController.getAllOrders ) .post(orderController.addOrder); authController.js exports.requireSignin = expressJwt({ secret: process.env.JWT_SECRET, algorithms: ['HS256'], // added later requestProperty: 'auth', // Decodes the token and assigns to auth object in request object getToken: function (req) { if (req.cookies.Authorization) { return req.cookies.Authorization; } return null; }, }); exports.isAdmin = (req, res, next) => { // If the user's role in the req.auth object is not admin, an error is passed to the global error handler; otherwise, go to the next function if (req.auth.role !== "admin") { return next( new ApiError(undefined, 403, "User is not authorized for access!") ); } next(); }; My front-end React code is hosted on AWS and my back-end Node.js is hosted on Azure. |
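With the front end on AWS and the API on Azure, the request is cross-site, so the browser only sends the auth cookie if the API's CORS and cookie attributes allow it; if req.cookies.Authorization never arrives, req.auth can end up undefined and the admin check throws, which could surface as exactly this 500. A sketch of the server-side settings to check, assuming Express with the cors and cookie-parser middleware (the origin is a placeholder):

    const cors = require('cors');
    // let the browser send credentials from the front-end origin
    app.use(cors({ origin: 'https://my-frontend.example.com', credentials: true }));
    // when issuing the cookie after login, mark it for cross-site use
    res.cookie('Authorization', token, { httpOnly: true, secure: true, sameSite: 'none' });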
Bind9: Disable DNSSEC validation on a per-zone basis? Posted: 26 Mar 2021 02:47 PM PDT I am trying to make a caching / forwarding-only DNS server using Bind9 with DNSSEC validation enabled by default. Assume you have the following information from my config file: acl "home-net" { 127.0.0.1; ::1; 192.168.1.0/24; 2000:db8:cafe:100::/64; }; options { forwarders { # Use Google DNS either by IPv6 or IPv4 is fine. 2001:4860:4860::8888; 2001:4860:4860::8844; 8.8.8.8; 8.8.4.4; }; dnssec-enable yes; dnssec-validation auto; allow-query { any; }; allow-query-cache { home-net; }; allow-recursion { home-net; }; }; zone "subdomain.example.net" { type forward; forward only; forwarders { # SAMBA PDC1 (Active Directory) 2000:db8:cafe:100::1; # SAMBA PDC2 (Active Directory) 2000:db8:cafe:100::2; }; }; As far as I understand: Whenever I want to look up a host registered in the subdomain subdomain.example.net, the nameserver will contact one of the two SAMBA PDCs that I have listed in the forwarders section of the zone configuration. The nameserver will in turn do DNSSEC validation to ensure that the two SAMBA PDCs are actually authorized to reply to requests for the domain subdomain.example.net. If the reply from the SAMBA PDCs cannot be validated through DNSSEC, then the nameserver will turn to Google DNS and ask if it can provide a DNSSEC-validated response. Now here is the problem: as I understand it, there is no DNSSEC support in SAMBA, neither through SAMBA INTERNAL_DNS nor through BIND9_DLZ, hence you cannot ever do DNSSEC validation on any zones maintained by SAMBA. As far as I understand there are 3 options: - Disable DNSSEC validation globally.
- Use negative trust anchors.
- Use the 'validate-except' option.
I will handle them one by one. Disable DNSSEC It is not really an option in my book. It basically reduces your setup to "works worldwide" ... except for your particular small corner of the world, so better to disable it altogether. It can be done by just changing the value of dnssec-enable and dnssec-validation to no . I will only use it as a temporary fix until I can activate DNSSEC again. Use negative trust anchors At first my interest was piqued. The idea is that you register a temporary exception with rndc and then it won't do any DNSSEC validation for the domain you want. However it is a temporary fix, since the anchor has a lifetime of at most one week. That means you have to do the same kind of sorcery as with certificates from Let's Encrypt - only the cron job has to be triggered more often. Use the 'validate-except' option In theory this should be the easiest solution of them all. I just have to add a new section to options called validate-except . Like so: options { dnssec-enable yes; dnssec-validation auto; validate-except { "subdomain.example.net"; "another.example.net"; }; } Sounds simple enough - right? :-) ... Except my nameserver didn't start due to "unknown option - validate-except". EDIT: It turns out Raspberry Pi OS uses BIND version 9.11, while the validate-except option was only implemented in BIND version 9.13. For reference's sake, Ubuntu 20.04 for Raspberry Pi uses BIND version 9.16. So does anyone out there have experience with a mixed-mode setup regarding DNSSEC? ... or would the easiest solution be to admit failure and install Ubuntu 20.04? :-) |
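For the record, the negative trust anchor needs no key material in the configuration; it is added at runtime. A sketch of the command and a weekly cron refresh, assuming BIND 9.11's rndc nta syntax (one week, 604800 seconds, is the maximum lifetime):

    rndc nta -lifetime 604800 subdomain.example.net
    # refreshed weekly from cron:
    0 3 * * 0 /usr/sbin/rndc nta -lifetime 604800 subdomain.example.net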
What Happens When You Write Data To RAID While It Is Scrubbing? Posted: 26 Mar 2021 03:04 PM PDT I have a RAID 5 setup with almost 15TB of data capacity. In the past I have let it go unchecked for too long, and found that multiple drives had SMART errors, so now I want to prevent bad blocks. I am implementing a weekly scrub, but I am wondering what will happen if a scheduled rsync job starts while the drives are scrubbing. Will the rsync just fail until the scrub is completed? Or will both of these operations happen at once? Or, worst case -- would it interrupt the scrubbing process so it would have to start over? My schedules are as follows: Daily jobs from 4 different local computers rsync to my local NAS at different times: 3:00am, 4:00am, 5:00am, and 6:00am. Daily job from my local NAS to a cloud server at 12:00pm. Weekly job Sunday at 3:00pm to scrub the RAID. Surely a scrub can take enough time that the other jobs would start while the RAID is still scrubbing. Basically, I'm not sure what the implications are of writing new data to the drives as they are being scrubbed. What are the best practices for handling this, with the priority being that the RAID gets scrubbed once a week and backups happen around that? Thank you |
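For Linux md RAID (an assumption, since the implementation isn't stated above), a scrub is an online consistency check: writes issued during it are serviced normally and the check keeps going rather than restarting, with the kernel arbitrating bandwidth between the two. A sketch of the weekly job and the knobs involved (md0 is a placeholder):

    echo check > /sys/block/md0/md/sync_action        # start the scrub
    cat /proc/mdstat                                  # progress; rsync writes proceed concurrently
    echo 50000 > /proc/sys/dev/raid/speed_limit_max   # optionally cap scrub bandwidth (KiB/s) so backups stay responsive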
Mac OS X El Capitan - PING fails with UNKNOWN hosts Posted: 26 Mar 2021 02:22 PM PDT Ping fails for the hosts the router lists as unknown. Pinging from 192.168.0.15, host imac2: hugodiaz@imac2 ~ % ping 192.168.0.5 PING 192.168.0.5 (192.168.0.5): 56 data bytes 64 bytes from 192.168.0.5: icmp_seq=0 ttl=64 time=5.780 ms 64 bytes from 192.168.0.5: icmp_seq=1 ttl=64 time=4.008 ms 64 bytes from 192.168.0.5: icmp_seq=2 ttl=64 time=3.437 ms ^Z zsh: suspended ping 192.168.0.5 hugodiaz@imac2 ~ % ping 192.168.0.6 PING 192.168.0.6 (192.168.0.6): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 ^Z zsh: suspended ping 192.168.0.6 +++++++++++ The router's client list looks like this: Host Name IP Address MAC Address Interface amazon-104e97522 192.168.0.8 34:af:b3:88:5b:f0 (WiFi)(0)(0) Primary Galaxy-A10e 192.168.0.9 3a:33:d4:ce:f9:37 (WiFi)(0)(0) Primary NPI1290A8 192.168.0.14 68:14:01:05:2a:db (WiFi)(0)(0) Primary imac2 192.168.0.15 98:9e:63:2c:6c:78 (WiFi)(0)(0) Primary android-474d7f6aabc9daee 192.168.0.4 a8:1e:84:4e:8a:2a (WiFi)(0)(0) Primary RokuPlayer 192.168.0.5 c8:3a:6b:26:74:16 (WiFi)(0)(0) Primary Unknown 192.168.0.6 d0:d0:03:a7:2d:91 <<<< Unknown (WiFi)(0)(0) Primary Upstairs 192.168.0.7 d8:31:34:f7:d0:36 (WiFi)(0)(0) Primary AmazonPlug037Q 192.168.0.3 dc:91:bf:3d:30:60 +++++++++ hugodiaz@imac2 ~ % netstat -rn | grep en1 default 192.168.0.1 UGSc en1 169.254 link#5 UCS en1 ! 192.168.0 link#5 UCS en1 ! 192.168.0.1/32 link#5 UCS en1 ! 192.168.0.1 f8:da:c:fb:bd:15 UHLWIir en1 1120 192.168.0.5 c8:3a:6b:26:74:16 UHLWI en1 978 192.168.0.7 d8:31:34:f7:d0:36 UHLWI en1 1165 192.168.0.8 34:af:b3:88:5b:f0 UHLWI en1 1165 192.168.0.14 68:14:1:5:2a:db UHLWI en1 729 192.168.0.15/32 link#5 UCS en1 ! 224.0.0/4 link#5 UmCS en1 ! 224.0.0.251 1:0:5e:0:0:fb UHmLWI en1 239.255.255.250 1:0:5e:7f:ff:fa UHmLWI en1 255.255.255.255/32 link#5 UCS en1 ! default fe80::fada:cff:fefb:bd15%en1 UGc en1 2603:7080:6902:2800::/64 link#5 UC en1 2603:7080:6902:2800::/56 fe80::fada:cff:fefb:bd15%en1 UGc en1 fe80::%en1/64 link#5 UCI en1 fe80::1cf1:65e2:3710:ab2e%en1 98:9e:63:2c:6c:78 UHLI lo0 fe80::6a14:1ff:fe05:2adb%en1 68:14:1:5:2a:db UHLWI en1 fe80::fada:cff:fefb:bd15%en1 f8:da:c:fb:bd:15 UHLWIir en1 ff00::/8 link#5 UmCI en1 ff01::%en1/32 link#5 UmCI en1 ff02::%en1/32 link#5 UmCI en1 |
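One detail worth noting from the netstat output above: there are UHLWI (ARP-resolved) entries for .5, .7, .8 and .14, but none for 192.168.0.6, which suggests the Mac never received an ARP reply from that device. A quick check (a sketch):

    arp -an | grep '192.168.0.6'    # "incomplete" or no entry = no ARP reply (device firewalled, asleep, or gone)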
We have 52 disks added to the VM, and we need to grow 7 specific disks to 500 GB each. Help identifying the disk names so the space can be increased in vCenter Posted: 26 Mar 2021 03:48 PM PDT output of lsscsi -s : [1:0:12:0] disk VMware Virtual disk 2.0 /dev/sds 257GB [2:0:2:0] disk VMware Virtual disk 2.0 /dev/sdy 257GB [2:0:9:0] disk VMware Virtual disk 2.0 /dev/sdae 257GB [2:0:15:0] disk VMware Virtual disk 2.0 /dev/sdak 257GB [5:0:2:0] disk VMware Virtual disk 2.0 /dev/sdan 257GB [5:0:9:0] disk VMware Virtual disk 2.0 /dev/sdat 257GB [5:0:15:0] disk VMware Virtual disk 2.0 /dev/sdaz 257GB VMWare settings Harddisk output: Hard disk 1 300 GB | SCSI(0:0) Hard disk 2 43 GB | SCSI(0:1) Hard disk 3 240 GB | SCSI(0:2) Hard disk 4 400 GB | SCSI(0:3) Hard disk 5 400 GB | SCSI(0:4) Hard disk 6 400 GB | SCSI(0:5) Hard disk 7 400 GB | SCSI(0:6) Hard disk 8 200 GB | SCSI(0:8) Hard disk 9 240 GB | SCSI(0:9) Hard disk 10 400 GB | SCSI(0:10) Hard disk 11 400 GB | SCSI(0:11) Hard disk 12 400 GB | SCSI(0:12) Hard disk 13 400 GB | SCSI(0:13) Hard disk 14 200 GB | SCSI(0:14) Hard disk 15 240 GB | SCSI(0:15) Hard disk 16 400 GB | SCSI(1:0) Hard disk 17 400 GB | SCSI(1:1) Hard disk 18 400 GB | SCSI(1:2) Hard disk 19 400 GB | SCSI(1:3) Hard disk 20 200 GB | SCSI(1:4) Hard disk 21 240 GB | SCSI(1:5) Hard disk 22 400 GB | SCSI(1:6) Hard disk 23 400 GB | SCSI(1:8) Hard disk 24 400 GB | SCSI(1:9) Hard disk 25 400 GB | SCSI(1:10) Hard disk 26 200 GB | SCSI(1:11) Hard disk 27 240 GB | SCSI(1:12) Hard disk 28 400 GB | SCSI(1:13) Hard disk 29 400 GB | SCSI(1:14) Hard disk 30 400 GB | SCSI(1:15) Hard disk 31 400 GB | SCSI(2:0) Hard disk 32 200 GB | SCSI(2:1) Hard disk 33 240 GB | SCSI(2:2) Hard disk 34 400 GB | SCSI(2:3) Hard disk 35 400 GB | SCSI(2:4) Hard disk 36 400 GB | SCSI(2:5) Hard disk 37 400 GB | SCSI(2:6) Hard disk 38 200 GB | SCSI(2:8) Hard disk 39 240 GB | SCSI(2:9) Hard disk 40 400 GB | SCSI(2:10) Hard disk 41 400 GB | SCSI(2:11) Hard disk 42 400 GB | SCSI(2:12) Hard disk 43 400 GB | SCSI(2:13) Hard disk 44 200 GB | SCSI(2:14) Hard disk 45 240 GB | SCSI(2:15) Hard disk 46 240 GB | SCSI(3:0) Hard disk 47 240 GB | SCSI(3:1) Hard disk 48 240 GB | SCSI(3:2) Hard disk 49 240 GB | SCSI(3:3) Hard disk 50 240 GB | SCSI(3:4) Hard disk 51 240 GB | SCSI(3:5) Hard disk 52 21 GB | SCSI(3:6) Please help me identify the right disks to allocate the space to. |
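One way to correlate the two lists (a sketch, with the usual assumptions for a VMware guest): the target number in lsscsi's [host:channel:target:lun] tuple corresponds to the unit number y in vCenter's SCSI(x:y), each virtual SCSI controller appears as its own Linux host adapter, and lsscsi -s prints decimal gigabytes, so a vCenter "240 GB" disk (binary gigabytes) shows up as the 257GB entries above - which matches all seven candidates. To confirm a specific device:

    ls -l /sys/block/sds/device            # shows the host:channel:target:lun behind /dev/sds
    ls -l /dev/disk/by-id/ | grep sds      # if disk.EnableUUID is set on the VM, this exposes an ID derived from the VMDK's UUID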
Permission denied (publickey). (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. exited with return code [255] Posted: 26 Mar 2021 08:04 PM PDT I try to connect to the virtual machine with gcloud, but it fails. Please advise: username@22.233.168.202: Permission denied (publickey). ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255] |
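A couple of first steps that often clear this up (a sketch; the instance name and zone are placeholders):

    gcloud compute ssh my-instance --zone us-central1-a    # let gcloud create and propagate its own key
    rm ~/.ssh/google_compute_engine*                       # if that key pair went stale, force regeneration on the next attempt
    gcloud compute instances describe my-instance --zone us-central1-a    # inspect metadata to see which public keys the VM actually has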
Synology QuickConnect Relay: Not Secure? Posted: 26 Mar 2021 06:37 PM PDT I was reading the QuickConnect whitepaper and was overall pretty impressed. But on page 10 it explains how the Relay service works, and it's implied (but not explicitly stated) that packets are decrypted on the Relay Server. They go on to pinky swear they won't snoop your traffic: While providing the promised services, QuickConnect makes no use of collected data from registered Synology NAS servers except in delivering such services. For more details, please visit the Privacy Terms on our official website. As most of you probably are, I can be a little paranoid about security. My previous setup used a TCP proxy server, which allowed E2E encryption to work natively. That was, however, somewhat brittle and left a public port exposed. It also added latency. Hole punching is really cool and does seem to create an encrypted tunnel E2E, so that sounds perfect for my needs. A few questions: - Is it correct that the QuickConnect relay isn't encrypted E2E?
- Can I modify QuickConnect to fall back to my own relay?
- Is my assumption correct that Hole Punching is safer than simply forwarding a public port?
- Will hole punching work when putting my Synology in my router's DMZ?
- Under which conditions will hole punching not work?
- Does QuickConnect use UDP or TCP hole punching?
|
Nginx reverse proxy error - too many redirects Posted: 26 Mar 2021 09:03 PM PDT I have set up an Nginx reverse proxy, but I am getting an error. I don't know why this is happening; something must be wrong with my configuration file. This page isn't working - redirected you too many times. ERR_TOO_MANY_REDIRECTS Here is my conf file as Certbot generated it: server { server_name base.4evergaming.com; error_page 403 https://www.4evergaming.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://192.168.1.10; } listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/base.4evergaming.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/base.4evergaming.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = base.4evergaming.com) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; listen [::]:80; server_name base.4evergaming.com; return 404; # managed by Certbot } |
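A common cause of this loop: the backend at 192.168.1.10 issues its own HTTP-to-HTTPS redirect because it never learns the client already arrived over TLS, so every proxied request gets redirected back to https://... and around again. A minimal sketch of the usual mitigation, assuming the backend honors the header:

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;    # tell the backend the original scheme
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://192.168.1.10;
    }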
How to prevent space reallocation on a PV after PEs have been moved? Posted: 26 Mar 2021 07:44 PM PDT I want to remove a PV from a Volume Group on a system where a large amount of data is written to the disks all the time. I can use pvmove to move all of the PV's physical extents to another PV, and I have done this a few times on lightly loaded systems where only a small amount of data is written to the disks. In those cases, when I finished moving all of the physical extents to another PV, the original PV was completely free and could be removed from the volume group. My concern is that on a heavily written volume, when I finish pvmoving all of the physical extents, LVM will reallocate space on the freed PV because it is still part of the Volume Group. Is there a way to set the PV as readable, but not writable (or some kind of copy-on-write mode), so LVM won't try to write new data to the freed PV? In other words, can I move all PEs from a PV to another PV and remove the freed-up PV from the Volume Group in an atomic operation? |
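LVM can do this without a reallocation window: a PV marked non-allocatable stays readable, but no new extents are placed on it. A sketch (device and VG names are hypothetical):

    pvchange -x n /dev/sdb1    # disable allocation on the PV first
    pvmove /dev/sdb1           # evacuate the remaining extents to other PVs in the VG
    vgreduce myvg /dev/sdb1    # remove the now-empty PV from the volume group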
Fast Google Cloud SQL Posted: 26 Mar 2021 05:03 PM PDT I have always set up machines with a typical LAMP stack, and recently I've been testing an external database setup with Google Cloud SQL. The performance hits I'm taking from it seem a bit unreasonable, though. The average load time of my website with a local database is 0.04s. With a connection to Cloud SQL over SSL, the load time is 0.4s. ~10x slower, so I investigated this and learned about connection pooling and how PHP doesn't support it, but ODBC does, and that's what my server seems to be using - so that's good, but ~10x slower is still bad. I tried setting up a Cloud SQL Proxy with the hope of that being a solution, but after setting it up earlier today it gave me a load time of 5s. ~100x slower, so definitely not a solution. Is there something I can do to get near-local database speed using the Google Cloud Platform, should I use some other service for better performance, or do I just have to take this performance hit and live with it? EDIT: More information about the system. The web server is a compute instance on Google Cloud Compute. Both the Compute instance and the SQL instance are in the same zone, using g1-small (1 vCPU, 1.7 GB memory). It seems like the latency is about 1 millisecond. Comparison of query runtimes: Local db on the left, Remote db on the right Right now each of these queries is individually sent across the network, so using a multi-query could save me a lot of time, assuming PHP actually sends all the queries in a single network request. |
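Two low-effort levers to try before switching services (a sketch; connection details are placeholders): persistent connections avoid paying the TCP+TLS handshake to Cloud SQL on every request, which is often the bulk of the per-request overhead.

    <?php
    // PDO: the connection is kept open and reused by the same PHP-FPM worker
    $pdo = new PDO('mysql:host=10.0.0.3;dbname=app', 'user', 'pass',
                   [PDO::ATTR_PERSISTENT => true]);
    // mysqli equivalent: the 'p:' host prefix requests a persistent connection
    $db = mysqli_connect('p:10.0.0.3', 'user', 'pass', 'app');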
Deploy from Tomcat text interface fails: Invalid context path null was specified Posted: 26 Mar 2021 07:04 PM PDT I have an apparently simple situation: a running Tomcat 7.0.54 instance whose host is configured with autoDeploy=false and unpackWARs=true, and I want to deploy a new web application there without restarting the server. From within the server machine I copied the webapp WAR file to the Tomcat appBase directory (webapps/ ), then from the HTML manager I can successfully deploy it from the "Deploy directory or WAR file located on server" section by only filling in the "Context path" form field and hitting "Deploy". I'm now trying to do the same from the command line, and while commands such as: curl -u admin:password1 http://localhost:8680/manager/text/list and: curl -u admin:password1 http://localhost:8680/manager/text/stop?path=/webapp both give OK - [...] , I can't find any syntax for the /deploy command that succeeds; every attempt results in the following error: FAIL - Invalid context path null was specified According to the manager docs, the following syntax should do it: curl -u admin:password1 http://localhost:8680/manager/text/deploy?war=webapp.war But it doesn't, and even adding a &path=/webapp parameter doesn't yield a different result. Permissions on the webapps/ folder are already right: like the other webapps already deployed and running, everything belongs to root . What am I missing? |
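Two things worth checking (a sketch): the text deployer wants an explicit path parameter, and an unquoted & makes the shell background the curl command and silently drop everything after it, which would explain why adding &path= appeared to change nothing:

    curl -u admin:password1 "http://localhost:8680/manager/text/deploy?path=/webapp&war=file:/path/to/webapp.war"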
How can I get nginx-botsearch of Fail2Ban to match a string but also not match that same string if it has additional trailing characters? Posted: 26 Mar 2021 04:02 PM PDT The system is Ubuntu 16.04, using fail2ban from the package manager (currently 0.9.3-1). I have enabled nginx-botsearch in jail.local. Here is my /etc/fail2ban/filter.d/nginx-botsearch.local (note that nginx-botsearch depends on botsearch-common): [INCLUDES] before = botsearch-common.conf after = botsearch-common.local [Definition] failregex = ^<HOST> \- \S+ \[\] \"(GET|POST|HEAD) \/<block> \S+\" 404 .+$ ^ \[error\] \d+#\d+: \*\d+ (\S+ )?\"\S+\" (failed|is not found) \(2\: No such file or directory\), client\: <HOST>\, server\: \S*\, request: \"(GET|POST|HEAD) \/<block> \S+\"\, .*?$ ignoreregex = Here is my /etc/fail2ban/filter.d/botsearch-common.local: [Init] block = \/?(<webmail>|<phpmyadmin>|<wordpress>|cgi-bin|mysqladmin)[^,]* webmail = roundcube|(ext)?mail|horde|(v-?)?webmail phpmyadmin = (typo3/|xampp/|admin/|)(pma|(php)?[Mm]y[Aa]dmin) wordpress = wp-(login|signup)\.php So here's the problem: I want it to match "http://example.com/wp-login.php" or "http://example.com/folder/wp-login.php", but not "http://example.com/wp-login.phpasdfasdfasdf" or "http://example.com/wp-login.php?asdfasdfasdf". I have tried using $, \n, \b, \B and any number of other things at the end of the wordpress line, to no avail. Please advise how this might be accomplished. |
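The trailing [^,]* in the block definition is what keeps swallowing the extra characters. One approach (an untested sketch): drop that wildcard and allow an optional directory prefix instead, letting the failregex's literal space after <block> do the anchoring:

    [Init]
    # optional folder prefix, then the exact filename; the space required after
    # <block> in the failregex then rejects wp-login.phpasdf and wp-login.php?asdf
    block = (?:[^ "]*/)?(<webmail>|<phpmyadmin>|<wordpress>|cgi-bin|mysqladmin)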
network drops, dhcp renewal fails Posted: 26 Mar 2021 03:00 PM PDT I'm struggling with a DHCP problem at work and can't seem to find a solution. I'm not sure if you can help me, but I thought I'd give it a try, as I have the feeling I've tried almost everything in my power! Here's the thing. There are four computers, which are basically exactly the same as any other computer in our place. Every 4 hours, the connection drops. It happens every day and started a few weeks ago. There wasn't any change on our side: still the same server, still the same switch. We even replaced one computer with a new one, and the problem still occurred. What I understand here is that the DHCP client tries to renew the lease after, say, 2 hours and again at 3 hours; the lease isn't renewed, and in the end the connection drops. A few seconds later, the computer broadcasts a brand-new request and gets an IP address. We know that: - Our DHCP lease lasts 4 hours.
- We have one DHCP server at this location.
- The DHCP pool has PLENTY of addresses available.
At this point we have tried different things, including setting a reservation in the DHCP server to see what happens. I don't know yet if that solved our problem. Still, I would like to know what caused it. Any thoughts on my issue? Thanks! Update Sorry I didn't answer sooner. IP reservation didn't work, as I expected. Something during the renewal process fails at some point. I'm going to dig into those leads you gave me. I can look at the switches and see what is going on there, but my rights on the DHCP server are much more limited. I will try to give you an update as soon as I can. Wireshark may help: I'll launch it this afternoon on a client, at about the time of the expected disconnection, and see what happens. Update 2 Hi guys. Sad to say, we didn't find anything relevant. Users have had this problem for a few months, and they have already been very (...) very patient. I really can't ask them to wait any longer. So, static IP addresses it is. That's the only workaround I have. I keep my fingers crossed hoping that this issue won't spread to other workstations. Thank you guys for your help! |
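For the capture, filtering to DHCP traffic keeps the trace small enough to read at the interesting moment (a sketch; the interface name is a placeholder):

    tcpdump -i eth0 -n -w dhcp-renewal.pcap 'port 67 or port 68'
    # then look for DHCPREQUESTs around the renewal times with no DHCPACK coming back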
Exchange 2016 keeps giving out its default self-signed certificate instead of the CA one Posted: 26 Mar 2021 05:03 PM PDT I've got an Exchange 2016 server being prepared for its prime time. But an Outlook client, connected to a mailbox on that server, pops up a window saying that the certificate was issued by an untrusted organization - more specifically, it's the default self-signed certificate, which was created during the Exchange installation. The problem is that I've created and installed a proper SSL certificate with the domain CA, assigned it to services and to IIS, but the server keeps giving out its self-signed certificate for some reason. Output of Get-ExchangeCertificate | Format-List FriendlyName,Subject,CertificateDomains,Thumbprint,Services FriendlyName : CA Certificate for HTTPS Subject : CN=web.contoso.com, OU=IT, O=The Contoso, L=Almaty, S=Almaty, C=KZ CertificateDomains : {web.contoso.com, mail.contoso.com, AutoDiscover.contoso.com, bsb-srv-mb-exch.contoso.com, BSB-SRV-MB-EXCH, contoso.com} Thumbprint : 8-4 Services : IMAP, POP, IIS, SMTP FriendlyName : Microsoft Exchange Subject : CN=BSB-SRV-MB-EXCH CertificateDomains : {BSB-SRV-MB-EXCH, BSB-SRV-MB-EXCH.contoso.com} Thumbprint : 6-7 Services : IMAP, POP, SMTP FriendlyName : Microsoft Exchange Server Auth Certificate Subject : CN=Microsoft Exchange Server Auth Certificate CertificateDomains : {} Thumbprint : 8-6 Services : SMTP FriendlyName : WMSVC Subject : CN=WMSvc-BSB-SRV-MB-EXCH CertificateDomains : {WMSvc-BSB-SRV-MB-EXCH} Thumbprint : F-0 Services : None It also does the very same thing when I use my browser to connect to the server over HTTPS - it keeps warning me about the self-signed certificate instead of the CA one. How can I make it use the proper certificate? |
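Worth trying (a sketch): re-assert the IIS binding by thumbprint from the Exchange Management Shell and restart IIS; note also that the Exchange Back End site on port 444 is expected to keep the self-signed certificate, so only the 443 binding on the Default Web Site should need the CA certificate.

    Enable-ExchangeCertificate -Thumbprint <full thumbprint of the CA certificate> -Services IIS
    iisreset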
What is the use of the ProxyPassReverse directive? Posted: 26 Mar 2021 07:28 PM PDT The definition from apache.org says: This directive lets Apache httpd adjust the URL in the Location, Content-Location and URI headers on HTTP redirect responses. This is essential when Apache httpd is used as a reverse proxy (or gateway) to avoid bypassing the reverse proxy because of HTTP redirects on the backend servers which stay behind the reverse proxy. Only the HTTP response headers specifically mentioned above will be rewritten. Apache httpd will not rewrite other response headers, nor will it by default rewrite URL references inside HTML pages. This means that if the proxied content contains absolute URL references, they will bypass the proxy. To rewrite HTML content to match the proxy, you must load and enable mod_proxy_html. path is the name of a local virtual path; url is a partial URL for the remote server. These parameters are used the same way as for the ProxyPass directive. Can someone please explain to me how this works? In general, what does this directive do? |
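In practice the directive is almost always paired with ProxyPass. A sketch (backend address and public hostname are hypothetical): if the backend answers a request with "Location: http://internal:8080/app/login/", ProxyPassReverse rewrites that header so the browser is redirected back through the proxy instead of to the unreachable internal address.

    ProxyPass        /app/ http://internal:8080/app/
    ProxyPassReverse /app/ http://internal:8080/app/
    # backend sends:    Location: http://internal:8080/app/login/
    # client receives:  Location: https://www.example.com/app/login/  (the proxy's own URL)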
nginx is sometimes slow - what can I optimize? Posted: 26 Mar 2021 06:04 PM PDT My website's loading times vary widely: sometimes the site is very fast, but sometimes it's very slow (especially in the evening, when there are the most visitors). Here are my configuration files: nginx.conf: user www-data; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { multi_accept on; worker_connections 8096; use epoll; } worker_rlimit_nofile 65535; http { include /etc/nginx/mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; server_tokens off; tcp_nodelay on; client_body_buffer_size 10K; client_header_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 2 1k; client_body_timeout 12; client_header_timeout 12; keepalive_timeout 15; send_timeout 10; gzip on; gzip_proxied any; gzip_min_length 1400; gzip_types text/plain application/javascript application/x-javascript text/javascript text/css text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_http_version 1.1; gzip_buffers 16 8k; open_file_cache max=2000 inactive=20s; open_file_cache_valid 60s; open_file_cache_min_uses 5; open_file_cache_errors off; #CloudFlare set_real_ip_from 199.27.128.0/21; set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.21.244.0/22; set_real_ip_from 103.22.200.0/22; set_real_ip_from 103.31.4.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 197.234.240.0/22; set_real_ip_from 198.41.128.0/17; set_real_ip_from 162.158.0.0/15; set_real_ip_from 104.16.0.0/12; set_real_ip_from 172.64.0.0/13; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; set_real_ip_from 2405:b500::/32; set_real_ip_from 2405:8100::/32; real_ip_header CF-Connecting-IP; include /etc/nginx/conf.d/*.conf; } default.conf: # non-www -> www server { listen 80; listen [::]:80; server_name example.com; return 301 http://www.example.com$request_uri; } # config server { listen 80; server_name www.example.com; root /home/example.com/html/; index index.php index.htm index.xml index.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~ (?U)\.php(/.*$|$) { #PATH_INFO fastcgi_split_path_info ^(.+?\.php)(/.*)$; if (!-f $document_root$fastcgi_script_name) { return 404; } fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_pass unix:/run/php/php7.0-fpm.sock; fastcgi_index index.php; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; include fastcgi_params; } location ~* \.(jpg|jpeg|gif|png|ico|gz|svg|svgz|ttf|otf|woff|eot|js|css)$ { access_log off; log_not_found off; add_header Vary Accept-Encoding; add_header Pragma "public"; add_header Cache-Control "public, max-age=2592000"; } } php7.0-fpm: ; start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'access.log' ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead.
; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of processes ; Note: The user is mandatory. If the group is not set, the default user's group ; will be used. user = www-data group = www-data ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses ; (IPv6 and IPv4-mapped) on a specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = /run/php/php7.0-fpm.sock ; Set listen(2) backlog. ; Default Value: 511 (-1 on FreeBSD and OpenBSD) ;listen.backlog = 511 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. ; Default Values: user and group are set as the running user ; mode is set to 0660 listen.owner = www-data listen.group = www-data ;listen.mode = 0660 ; When POSIX Access Control Lists are supported you can set them using ; these options, value is a comma separated list of user/group names. ; When set, listen.owner and listen.group are ignored ;listen.acl_users = ;listen.acl_groups = ; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Specify the nice(2) priority to apply to the pool processes (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool processes will inherit the master process priority ; unless it specified otherwise ; Default Value: no set ; process.priority = -19 ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. 
pm = static ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. pm.max_children = 100 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2 pm.start_servers = 5 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 4 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 100 ; The number of seconds after which an idle process will be killed. ; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 pm.max_requests = 5000 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. It shows the following informations: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. ; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. 
; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in µs of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ; If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. ; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: /usr/share/php/7.0/fpm/status.html ; ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. 
; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{miliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. Some exemples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: output header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsuled in a %{<strftime_format>}t tag ; e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsuled in a %{<strftime_format>}t tag ; e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{mili}d %{kilo}M %C%%" ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. 
However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environement, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Clear environment in FPM workers ; Prevents arbitrary environment variables from reaching FPM worker processes ; by clearing the environment in workers before env vars specified in this ; pool configuration are added. ; Setting to "no" will make all environment variables available to PHP code ; via getenv(), $_ENV and $_SERVER. ; Default Value: yes ;clear_env = no ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; exectute php code. ; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 .php7 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /usr) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M |
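Given that the slowdowns track visitor peaks, the first thing worth measuring is whether the static pool of 100 children saturates in the evening. A sketch using directives already documented in the dump above (paths are placeholders):

    ; in the [www] pool configuration
    pm.status_path = /fpm-status
    slowlog = /var/log/php7.0-fpm.slow.log
    request_slowlog_timeout = 5s

    # nginx side: expose the status page to localhost only
    location = /fpm-status {
        allow 127.0.0.1;
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

If "listen queue" or "max children reached" climbs during the slow periods, the bottleneck is FPM capacity rather than nginx.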
Why does bind allow TTL to be set record by record if different TTLs are not allowed within the same record set? Posted: 26 Mar 2021 03:03 PM PDT Why does BIND allow the TTL to be set record by record if different TTLs are not allowed within the same record set? If I set the zone TTL using: $TTL 39600 And then set a record TTL using: @ 300 IN A 1.1.1.1 I get this warning in my logs: TTL set to prior TTL (300) This is because I have "different TTLs for records within the same record set", which is not allowed. If this is not allowed, what's the point of being able to set the TTL record by record? Thanks |
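The per-record syntax is useful because the rule applies per RRset, not per record: only records sharing the same name, class and type must agree (RFC 2181 §5.2), and the warning means BIND found another record in the same set with a different (here, the default) TTL and coerced it. A sketch:

    $TTL 39600
    @    300 IN A 1.1.1.1    ; first record of the @/A set: TTL 300
    @        IN A 2.2.2.2    ; same set, inherits 39600 -> "TTL set to prior TTL (300)"
    www  600 IN A 3.3.3.3    ; a different RRset: its own TTL is fine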
mysqldump without SQL_BIG_SELECTS Posted: 26 Mar 2021 08:04 PM PDT Trying to dump all databases for replication creation using mysqldump --all-databases --master-data --single-transaction > all_databases.sql results in the following error: mysqldump: Couldn't execute 'SELECT /*!40001 SQL_NO_CACHE */ * FROM xxxtable ': The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay (1104) Is there a way to make mysqldump work without updating my.cnf and restarting the server? We definitely wouldn't want to permanently enable big selects on the production server. Using MySQL 5.6 |
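Both variables have global scope in MySQL 5.6, so they can be flipped at runtime; new sessions, including the one mysqldump opens, pick up the global value. A sketch, assuming an account with the SUPER privilege:

    mysql -e "SET GLOBAL SQL_BIG_SELECTS=1;"
    mysqldump --all-databases --master-data --single-transaction > all_databases.sql
    mysql -e "SET GLOBAL SQL_BIG_SELECTS=0;"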
How many SMART power-on hours should a new hard drive have? Posted: 26 Mar 2021 03:39 PM PDT I purchased a new HGST hard drive. Using smartctl on OS X, the new drive shows 71 power-on hours. Does this make sense, or is it a sign of a refurbished drive? I can't imagine why the manufacturer would need it powered on for almost 3 days. I can confirm these values are not minutes or seconds (they have not changed). Model Family: Hitachi Ultrastar 7K4000 Device Model: Hitachi HUS724020ALE640 User Capacity: 2,000,398,934,016 bytes [2.00 TB] ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 71 |
having trouble setting up NFS server on centos 6.5 Posted: 26 Mar 2021 07:04 PM PDT Setup: Provider: Linode nfs server: a Linode with CentOS 6.5 nfs client: a Linode with CentOS 6.5 When I try mounting, I get: mount.nfs: access denied by server while mounting x.x.x.x:/shared This is what happens when I restart the nfs service, on both client and server: [shortfellow@li829-73 ~]$ sudo service nfs restart Shutting down NFS daemon: [ OK ] Shutting down NFS mountd: [ OK ] Shutting down RPC idmapd: [ OK ] FATAL: Module nfsd not found. FATAL: Error running install command for nfsd Starting NFS services: [ OK ] Starting NFS mountd: [ OK ] Starting NFS daemon: [ OK ] Starting RPC idmapd: [ OK ] I do not understand the problem. |
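Besides the mount error, the FATAL lines matter: if the kernel has no nfsd module (Linodes of that era booted a custom kernel without it), the kernel NFS server isn't actually running even though the init script prints OK - switching the Linode to the distribution kernel is the usual fix. Basic checks on the server (a sketch; the share path is taken from the error above):

    exportfs -ra               # re-read /etc/exports
    showmount -e localhost     # does the server itself list /shared?
    rpcinfo -p                 # are mountd and nfs registered with rpcbind?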
What is the easiest approach to get FreeRDP installed onto Amazon Linux? Posted: 26 Mar 2021 09:03 PM PDT What is the easiest approach to get FreeRDP installed onto Amazon Linux? I've tried using RPMForge but am not having any luck; I'm getting errors that dependencies like libpulse.so.0 are missing. Would appreciate any suggestions. |
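One route worth trying before third-party repos (a sketch; whether EPEL carries a freerdp package for your Amazon Linux release is an assumption): the Amazon Linux AMI ships with an EPEL repository definition that is disabled by default.

    sudo yum-config-manager --enable epel
    sudo yum install freerdp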
Squid Session User Authentication Posted: 26 Mar 2021 04:02 PM PDT I installed Squid in transparent mode with the external session helper [1]. Then I created a PHP login page so a user can log in with his username and password. But the problem is that once a user logs in with the correct info, other people can get access using the authorized user's IP: the session is started for the IP, and anyone who knows that IP can use the internet. How can I limit this to the user? Is there any ACL in Squid to bind the IP and username together? Or do we have to use a PHP session to limit the session in the browser? If so, how can I communicate from server to client to check whether the session is still active? [1] http://www.andybev.com/index.php/Setting_up_a_captive_portal_from_scratch_using_Debian Thanks for your help! |
Run a scheduled task as an unprivileged user remotely Posted: 26 Mar 2021 02:47 PM PDT I need to allow a group of unprivileged users to trigger a predefined scheduled task on a Windows Server 2008 R2 host. I seem unable to find the respective rights to do so. Upon an attempt to connect to the remote Task Scheduler, the remote system just gives me the middle finger: Even when a user is logged on interactively, I cannot figure out how I would grant her the necessary permissions to run a task. In the pre-2008 era, a .job file was created in the %SYSTEMROOT%\SYSTEM32\Tasks folder, where you could manipulate ACLs and influence the Task Scheduler's behavior. In 2008, there seems to be no similar facility. Note that I do not want to create additional tasks, I just want to run an existing one. |
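A frequently cited workaround for the 2008-era scheduler (an assumption on my part that it fits here - the task store still lives on disk, just as XML files instead of .job files): grant the group read-and-execute on the task's file under %SYSTEMROOT%\System32\Tasks, after which schtasks /Run works for its members. A sketch with hypothetical names:

    icacls "C:\Windows\System32\Tasks\MyPredefinedTask" /grant "CONTOSO\TaskRunners:(RX)"
    :: then, as an unprivileged group member, from a remote machine:
    schtasks /Run /S server01 /TN "MyPredefinedTask"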
Apache2 not sending installed SSL certificate Posted: 26 Mar 2021 06:04 PM PDT I've spent hours and hours researching this problem, and before I do something drastic like redoing all of the relevant configs I thought I'd ask for help. I'm a student sys-admin at a college and we've been having a problem with a website we're hosting. Visiting the website gives a "security certificate not trusted" warning. Viewing the certificate shows it is the default, self-signed server cert, not the one we purchased that is supposed to be served. This problem first came to our attention when we tried to switch this domain from pointing to a "regular" site to a new Drupal site. Originally domain.flavor.name.edu and domain.name.edu both pointed to the same regular site. We wanted to keep domain.flavor.name.edu pointing to the old site and point domain.name.edu to the new Drupal site, so I deleted the domain.name.edu.conf files out of vhosts.d. Understandably, the SSL errors came, but since I had never seen any of our other sites with valid SSL I didn't think much of it. However, the boss insists that the SSL was working fine before. To backtrack, I moved back the files that I had removed, but I don't think that solved the problem (sorry, I'm a bit hazy here; it's been several weeks since this first happened, and the other sys-admin may have changed some things too). Anyway, maybe that means the problem really is just with the .confs in vhosts.d, since domain.name.edu is still pointing to the new Drupal site and not back to the old one. I have done several restarts of Apache, both graceful and regular. The server (running Gentoo) is set up with name-based virtual hosts, all on the same IP. As I understand it, we should be able to have multiple sites with different SSL certs through SNI. The error_log confirms that we have SNI set up (Init: Name-based SSL virtual hosts only work for ...). In /etc/apache2/vhosts.d/ there's: 00_default_vhost.conf 00_ssl_domain.name.edu.conf 05_default_ssl_vhost.conf blah blah more .confs I remember reading there can be conflicts if Apache reads the wrong .conf in vhosts.d first and uses whatever's there without looking further, but I think the numbers are supposed to take care of that; order-wise, 00_ssl_domain.name.edu should come before the default. In 00_ssl_domain.name.edu.conf ... SSLCertificateFile /etc/ssl/apache2/domain.name.edu.crt ... SSLCertificateKeyFile /etc/ssl/apache2/domain.name.edu.key ... SSLCertificateChainFile /etc/ssl/apache2/geotrust.crt ... Both the certificate and intermediate should be good; I even dug up the email from earlier this spring when we got the certificates and recopied them in. openssl verify -CAfile geotrust.crt domain.name.edu.crt returns OK. Maybe this is a Drupal problem, maybe I've botched something horribly, but any help would be so greatly appreciated. *disclaimer: Sorry about the long text; also, I have only been at my post for a year, and only in any capacity since the beginning of this semester. The previous sys-admin who did everything here left this semester. So basically I didn't set up these servers, the Apache install, etc. 
Edit 1: Testing on Windows 7 with Firefox 15, Chrome 22, and IE 9 gives the same result in all three browsers.

Edit 2: The relevant config from vhosts.d, 00_ssl_domain.name.edu.conf:

<IfDefine SSL>
#<IfDefine SSL_DEFAULT_VHOST>
<IfModule ssl_module>
# see bug #178966 why this is in here
# When we also provide SSL we have to listen to the HTTPS port
# Note: Configurations that use IPv6 but not IPv4-mapped addresses need two
#       Listen directives: "Listen [::]:443" and "Listen 0.0.0.0:443"
Listen 128.220.29.244:443

# Added so that the ServerName directive works
NameVirtualHost 128.220.29.244:443

# Go ahead and accept connections for these vhosts
# from non-SNI clients
SSLStrictSNIVHostCheck off

<VirtualHost 128.220.29.244:443>
    ServerName domain.name.edu
    #Include /etc/apache2/vhosts.d/default_vhost.include
    Include /etc/apache2/vhosts.d/domain.include

    <IfModule log_config_module>
        TransferLog /var/log/apache2/ssl_access_domain.name.edu
    </IfModule>

    ## SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on
    #SSLLog /var/log/apache2/ssl_engine_log
    LogLevel debug

    ## SSL Cipher Suite:
    # List the ciphers that the client is permitted to negotiate.
    # See the mod_ssl documentation for a complete list.
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL

    ## Server Certificate:
    # Point SSLCertificateFile at a PEM encoded certificate. If the certificate
    # is encrypted, then you will be prompted for a pass phrase. Note that a
    # kill -HUP will prompt again. Keep in mind that if you have both an RSA
    # and a DSA certificate you can configure both in parallel (to also allow
    # the use of DSA ciphers, etc.)
    SSLCertificateFile /etc/ssl/apache2/domain.name.edu.crt

    ## Server Private Key:
    # If the key is not combined with the certificate, use this directive to
    # point at the key file. Keep in mind that if you've both a RSA and a DSA
    # private key you can configure both in parallel (to also allow the use of
    # DSA ciphers, etc.)
    SSLCertificateKeyFile /etc/ssl/apache2/domain.name.edu.key

    ## Server Certificate Chain:
    # Point SSLCertificateChainFile at a file containing the concatenation of
    # PEM encoded CA certificates which form the certificate chain for the
    # server certificate. Alternatively the referenced file can be the same as
    # SSLCertificateFile when the CA certificates are directly appended to the
    # server certificate for convenience.
    SSLCertificateChainFile /etc/ssl/apache2/geotrust.crt
    #SSLCertificateChainFile /etc/ssl/test-certs/geotrust.crt

    ## Certificate Authority (CA):
    # Set the CA certificate verification path where to find CA certificates
    # for client authentication or alternatively one huge file containing all
    # of them (file must be PEM encoded).
    # Note: Inside SSLCACertificatePath you need hash symlinks to point to the
    # certificate files. Use the provided Makefile to update the hash symlinks
    # after changes.
    #SSLCACertificatePath /etc/ssl/apache2/ssl.crt
    #SSLCACertificateFile /etc/ssl/apache2/ca-bundle.crt

    ## Certificate Revocation Lists (CRL):
    # Set the CA revocation path where to find CA CRLs for client
    # authentication or alternatively one huge file containing all of them
    # (file must be PEM encoded).
    # Note: Inside SSLCARevocationPath you need hash symlinks to point to the
    # certificate files. Use the provided Makefile to update the hash symlinks
    # after changes.
    #SSLCARevocationPath /etc/ssl/apache2/ssl.crl
    #SSLCARevocationFile /etc/ssl/apache2/ca-bundle.crl

    ## Client Authentication (Type):
    # Client certificate verification type and depth. Types are none, optional,
    # require and optional_no_ca. Depth is a number which specifies how deeply
    # to verify the certificate issuer chain before deciding the certificate is
    # not valid.
    #SSLVerifyClient require
    #SSLVerifyDepth 10

    ## Access Control:
    # With SSLRequire you can do per-directory access control based on
    # arbitrary complex boolean expressions containing server variable checks
    # and other lookup directives. The syntax is a mixture between C and Perl.
    # See the mod_ssl documentation for more details.
    #<Location />
    #    #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
    #                 and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
    #                 and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
    #                 and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
    #                 and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
    #                or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
    #</Location>

    ## SSL Engine Options:
    # Set various options for the SSL engine.
    ## FakeBasicAuth:
    # Translate the client X.509 into a Basic Authorisation. This means that
    # the standard Auth/DBMAuth methods can be used for access control. The
    # user name is the `one line' version of the client's X.509 certificate.
    # Note that no password is obtained from the user. Every entry in the user
    # file needs this password: `xxj31ZMTZzkVA'.
    ## ExportCertData:
    # This exports two additional environment variables: SSL_CLIENT_CERT and
    # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
    # server (always existing) and the client (only existing when client
    # authentication is used). This can be used to import the certificates
    # into CGI scripts.
    ## StdEnvVars:
    # This exports the standard SSL/TLS related `SSL_*' environment variables.
    # Per default this exportation is switched off for performance reasons,
    # because the extraction step is an expensive operation and is usually
    # useless for serving static content. So one usually enables the
    # exportation for CGI and SSI requests only.
    ## StrictRequire:
    # This denies access when "SSLRequireSSL" or "SSLRequire" applied even
    # under a "Satisfy any" situation, i.e. when it applies access is denied
    # and no other module can change it.
    ## OptRenegotiate:
    # This enables optimized SSL connection renegotiation handling when SSL
    # directives are used in per-directory context.
    #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
    <FilesMatch "\.(cgi|shtml|phtml|php)$">
        SSLOptions +StdEnvVars
    </FilesMatch>
    <Directory "/var/www/localhost/cgi-bin">
        SSLOptions +StdEnvVars
    </Directory>

    ## SSL Protocol Adjustments:
    # The safe and default but still SSL/TLS standard compliant shutdown
    # approach is that mod_ssl sends the close notify alert but doesn't wait
    # for the close notify alert from the client. When you need a different
    # shutdown approach you can use one of the following variables:
    ## ssl-unclean-shutdown:
    # This forces an unclean shutdown when the connection is closed, i.e. no
    # SSL close notify alert is sent or allowed to be received. This violates
    # the SSL/TLS standard but is needed for some brain-dead browsers. Use
    # this when you receive I/O errors because of the standard approach where
    # mod_ssl sends the close notify alert.
    ## ssl-accurate-shutdown:
    # This forces an accurate shutdown when the connection is closed, i.e. an
    # SSL close notify alert is sent and mod_ssl waits for the close notify
    # alert of the client. This is 100% SSL/TLS standard compliant, but in
    # practice often causes hanging connections with brain-dead browsers. Use
    # this only for browsers where you know that their SSL implementation
    # works correctly.
    # Notice: Most problems of broken clients are also related to the HTTP
    # keep-alive facility, so you usually additionally want to disable
    # keep-alive for those clients, too. Use variable "nokeepalive" for this.
    # Similarly, one has to force some clients to use HTTP/1.0 to work around
    # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
    # "force-response-1.0" for this.
    <IfModule setenvif_module>
        BrowserMatch ".*MSIE.*" \
            nokeepalive ssl-unclean-shutdown \
            downgrade-1.0 force-response-1.0
    </IfModule>

    ## Per-Server Logging:
    # The home of a custom SSL log file. Use this when you want a compact
    # non-error SSL logfile on a virtual host basis.
    <IfModule log_config_module>
        CustomLog /var/log/apache2/ssl_request_log \
            "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    </IfModule>
</VirtualHost>
</IfModule>
#</IfDefine>
</IfDefine>
# vim: ts=4 filetype=apache

Edit 3: Output of apache2 -S:

[Thu Oct 25 11:02:02 2012] [warn] _default_ VirtualHost overlap on port 80, the first has precedence
(the same warning is printed ten times)
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:80  domain1.edu  (/etc/apache2/vhosts.d/10_domain1.edu.conf:38)
*:80  domain2.edu  (/etc/apache2/vhosts.d/10_domain2.edu.conf:38)
*:80  domain3.edu  (/etc/apache2/vhosts.d/10_domain3.edu.conf:38)
*:80  domain4.edu  (/etc/apache2/vhosts.d/10_domain4.edu.conf:38)
*:80  domain5.edu  (/etc/apache2/vhosts.d/10_domain5.edu.conf:38)
*:80  domain6.edu  (/etc/apache2/vhosts.d/10_domain6.edu.conf:38)
*:80  domain7.edu  (/etc/apache2/vhosts.d/10_domain7.edu.conf:38)
*:80  domain8.edu  (/etc/apache2/vhosts.d/10_domain8.edu.conf:38)
*:80  domain9.edu  (/etc/apache2/vhosts.d/10_domain9.edu.conf:38)
*:80  domain10.edu (/etc/apache2/vhosts.d/10_domain10.edu.conf:38)
*:80  domain11.edu (/etc/apache2/vhosts.d/10_domain11.edu.conf:38)
Syntax OK

I don't have any problem accessing any of the sites; it's just the SSL errors.
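One more thing I plan to rule out (a guess on my part, since the whole vhost above sits inside <IfDefine SSL> and the apache2 -S output lists no *:443 vhosts at all): whether I simply ran the dump without the defines the init script uses. On Gentoo those come from APACHE2_OPTS, assuming the stock layout:

# The SSL vhost only exists when apache2 is started with -D SSL;
# check what the init script passes:
grep APACHE2_OPTS /etc/conf.d/apache2

# Re-run the vhost dump with those defines; a healthy config should now
# show a 128.220.29.244:443 section for domain.name.edu:
apache2 -D SSL -D SSL_DEFAULT_VHOST -S
|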
Install and configure SSL with TeamCity using cert generated from a global CA Posted: 26 Mar 2021 03:00 PM PDT UPDATE: Updated to be more specific about what I'm dealing with. I thought I was on the right path before, but now things just feel more obscure.

I have no experience with Apache/Tomcat/whatever TeamCity is running under (I've pretty much only worked in IIS before), and I am having a hard time understanding how to install an SSL certificate so I can use TeamCity over https. I have an SSL cert from a global CA, but I am having a hard time with the instructions here and here. I imported my cert into a keystore and configured my server.xml file to point at that keystore, following the directions. However, when I go to my site it says there is a problem with my certificate. It seems like the instructions from the CA, the instructions from TeamCity, and the instructions from Apache (which TeamCity links to) are all different. Can anyone help explain the steps I'm missing or skipping? Note: this is running on a Windows box, if that makes a difference.
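For context, here is roughly the keystore procedure I followed (a sketch, not my literal commands: the alias tomcat, the password changeit, and the file names are placeholders, and the exact connector attributes vary with the Tomcat version TeamCity bundles):

REM 1. keytool cannot import a bare private key, so bundle key + cert +
REM    chain into a PKCS#12 file first, then convert it to a keystore:
openssl pkcs12 -export -in mysite.crt -inkey mysite.key -certfile ca-chain.crt -name tomcat -out mysite.p12
keytool -importkeystore -srckeystore mysite.p12 -srcstoretype PKCS12 -destkeystore teamcity.jks -deststorepass changeit

REM 2. Import the CA chain as trusted so the served chain is complete:
keytool -import -trustcacerts -alias ca -file ca-chain.crt -keystore teamcity.jks -storepass changeit

REM 3. In the TeamCity conf\server.xml, point an HTTPS connector at the
REM    keystore, roughly like this, then restart the TeamCity service:
REM    <Connector port="443" protocol="HTTP/1.1" SSLEnabled="true"
REM               scheme="https" secure="true"
REM               keystoreFile="C:\path\to\teamcity.jks"
REM               keystorePass="changeit" />

My understanding is that a "problem with my certificate" error can mean the keystore holds only the certificate and not the private key, which is why the PKCS#12 step matters, but I'd welcome corrections.
|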
Permission denied on files in a directory on a CIFS-mounted Windows share in Linux Posted: 26 Mar 2021 04:23 PM PDT I have two directories, c:\work\directory1 and c:\work\directory2, which are mounted under /mnt/c-drive/. I can read any file under directory1, but when I try to read any file in directory2 I receive a "Permission denied" error.

/mnt/c-drive/directory1
drwxrwxrwx 1 root root   0 2008-10-17 11:13 directory1
/mnt/c-drive/directory1/file1
-rwxrwSrwx 1 root root 257 2008-10-17 11:13 file1
/mnt/c-drive/directory2
drwxrwxrwx 1 root root   0 2009-07-20 10:42 directory2
/mnt/c-drive/directory2/file1
-rwxrwSrwx 1 root root 844 2009-07-20 10:42 file1

The Windows machine is running Windows XP Media Center Edition; the Linux machine is running Fedora 10. When I right-click on either of the two files or their parent directories in Windows, their attributes appear identical.

On the files: Read Only -, Hidden -, file is ready for archiving +, for fast search +, compress -, encrypt -
On the directories: Read Only +, Hidden -, file is ready for archiving -, for fast search +, compress -, encrypt -

If there's any other info I can give to help, let me know. Any help would be appreciated, thanks.

Additional info: mounted via mount -t cifs //192.168.1.103/c /mnt/c-drive with no username/password.

CACLS output:

Directory 1: C:\work\directory1
BUILTIN\Users:F
BUILTIN\Users:(OI)(CI)(IO)F
Everyone:F
Everyone:(OI)(CI)(IO)(special access:)
    STANDARD_RIGHTS_ALL DELETE READ_CONTROL WRITE_DAC WRITE_OWNER SYNCHRONIZE
    STANDARD_RIGHTS_REQUIRED GENERIC_READ GENERIC_WRITE GENERIC_ALL
    FILE_GENERIC_READ FILE_GENERIC_WRITE FILE_GENERIC_EXECUTE FILE_READ_DATA
    FILE_WRITE_DATA FILE_APPEND_DATA FILE_READ_EA FILE_WRITE_EA FILE_EXECUTE
    FILE_DELETE_CHILD FILE_READ_ATTRIBUTES FILE_WRITE_ATTRIBUTES
BUILTIN\Administrators:F
BUILTIN\Administrators:(OI)(CI)(IO)F
NT AUTHORITY\SYSTEM:F
NT AUTHORITY\SYSTEM:(OI)(CI)(IO)F
E510\Rob:F
CREATOR OWNER:(OI)(CI)(IO)F
BUILTIN\Users:(OI)(CI)(IO)(special access:) GENERIC_READ GENERIC_EXECUTE
BUILTIN\Users:(CI)(IO)(special access:) FILE_APPEND_DATA
BUILTIN\Users:(CI)(IO)(special access:) FILE_WRITE_DATA

Directory 2: C:\work\directory2
BUILTIN\Users:F
BUILTIN\Users:(OI)(CI)(IO)F
Everyone:F
Everyone:(OI)(CI)(IO)(special access:)
    STANDARD_RIGHTS_ALL DELETE READ_CONTROL WRITE_DAC WRITE_OWNER SYNCHRONIZE
    STANDARD_RIGHTS_REQUIRED GENERIC_READ GENERIC_WRITE GENERIC_ALL
    FILE_GENERIC_READ FILE_GENERIC_WRITE FILE_GENERIC_EXECUTE FILE_READ_DATA
    FILE_WRITE_DATA FILE_APPEND_DATA FILE_READ_EA FILE_WRITE_EA FILE_EXECUTE
    FILE_DELETE_CHILD FILE_READ_ATTRIBUTES FILE_WRITE_ATTRIBUTES
BUILTIN\Administrators:F
BUILTIN\Administrators:(OI)(CI)(IO)F
NT AUTHORITY\SYSTEM:F
NT AUTHORITY\SYSTEM:(OI)(CI)(IO)F
E510\Rob:F
CREATOR OWNER:(OI)(CI)(IO)F
BUILTIN\Users:(OI)(CI)(IO)(special access:) GENERIC_READ GENERIC_EXECUTE
BUILTIN\Users:(CI)(IO)(special access:) FILE_APPEND_DATA
BUILTIN\Users:(CI)(IO)(special access:) FILE_WRITE_DATA

CACLS output for the two individual files:

directory1\file1
BUILTIN\Users:F
Everyone:F
BUILTIN\Administrators:F
NT AUTHORITY\SYSTEM:F
E510\Rob:F

directory2\file1
E510\Rob:F
NT AUTHORITY\SYSTEM:F
BUILTIN\Administrators:F

So now I can see the permission difference: directory2\file1 has no Everyone or BUILTIN\Users entry at all, only E510\Rob, SYSTEM, and Administrators.
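In case it's relevant, here's the kind of remount I was planning to try next (the uid/gid values are placeholders, and Rob is the account from the CACLS output above):

# Mount as a named Windows user instead of the null/guest session; the
# guest session only gets what Everyone is granted, and directory2\file1
# has no Everyone entry at all:
umount /mnt/c-drive
mount -t cifs //192.168.1.103/c /mnt/c-drive -o username=Rob,uid=1000,gid=1000,file_mode=0644,dir_mode=0755

# Or, on the Windows (XP) side, grant Everyone read access to the file:
# cacls c:\work\directory2\file1 /E /G Everyone:R
|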