Sunday, October 31, 2021

Recent Questions - Server Fault


APP_PROTECT failed to get compilation status

Posted: 31 Oct 2021 08:22 PM PDT

I have installed Nginx Plus and App Protect (provided by Nginx Plus and F5). I followed the config guide (https://docs.nginx.com/nginx-app-protect/configuration/). The issue is whenever I add the lines

app_protect_enable on;
app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";

in nginx.conf (as shown in the config guide linked above), I get an error saying APP_PROTECT failed to get compilation status.

Nginx error log shows this:

2021/11/01 02:37:16 [notice] 5967#5967: APP_PROTECT { "event": "configuration_load_start", "configSetFile": "/opt/app_protect/config/config_set.json" }
2021/11/01 02:37:16 [error] 5967#5967: APP_PROTECT failed to get compilation status
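For reference, a minimal sketch of the context the guide puts those directives in (the listen port here is a placeholder, not my exact config):

load_module modules/ngx_http_app_protect_module.so;

http {
    server {
        listen 80;

        app_protect_enable on;
        app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";
    }
}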

Has anyone else faced the same problem? I emailed NGINX Plus support and have yet to receive a reply (48 hours have passed since emailing them).

Port Forwarding not working (ZTE F660)

Posted: 31 Oct 2021 08:05 PM PDT

I have been trying to run a Minecraft server and followed all the steps for port forwarding, but it still doesn't work.

[screenshot: setting it up]

[screenshot: error message]

Please help; I have been working on this for quite a while.

Why can't I use AWS EC2 ImageBuilder to create a RHEL based container?

Posted: 31 Oct 2021 07:46 PM PDT

In the AWS console for EC2 ImageBuilder the option to create a Container Recipe using RHEL as the base image seems to be disabled.

From the EC2 Image Builder console: Container Recipes -> Create container recipe. The 'Base Image' section's 'Image Operating System' selector allows Amazon Linux, Windows, Ubuntu, and CentOS to be selected. Also listed are 'Red Hat Enterprise Linux (RHEL)' and 'SUSE Linux Enterprise Server (SLES)', but both of these are disabled.

I want to use RHEL as the base for my container. Is there something I need to do in my AWS account to make these operating systems selectable?

The console itself says that RHEL is supported, and I can't locate any documentation saying otherwise.

Setting static IP manually works - Using ansible gives me issues

Posted: 31 Oct 2021 07:14 PM PDT

So I have a couple of Raspberry Pis I'm trying to use as a cluster, and I'm learning Ansible to manage them more easily. I'm running into an issue, though. I can manually set a static IP using netctl, but when I try to do it with Ansible using the exact same commands, I have issues. A weird note: the version that doesn't work fails on Raspberry Pi 4s but works on Raspberry Pi Bs.

For example, if I enter the following manually, I get no issues whatsoever:

/etc/netctl/eth0

Description='Static IP for cluster'
Interface=eth0
Connection=ethernet
IP=static
Address=('192.168.1.173/24')
#Routes=('192.168.0.0/24 via 192.168.1.2')
Gateway='192.168.1.1'
DNS=('192.168.1.1')

netctl enable eth0

systemctl stop dhcpcd

systemctl disable dhcpcd

and after reboot it works fine.

I can also get it to work if I use the following:

- name: copy static IP file
  block:
    - name: create netctl file
      raw: echo $'Description=\'A basic static ethernet connection\'\nInterface=eth0\nConnection=ethernet\nIP=static\nAddress=(\'{{ host_ip_addr }}/24\')\n#Routes=(\'192.168.0.0/24 via 192.168.1.2\')\nGateway=\'192.168.1.1\'\nDNS=(\'192.168.1.1\')' > /etc/netctl/eth0
      args:
        executable: /bin/bash
    - name: chmod netctl file
      raw: chmod 644 /etc/netctl/eth0
      args:
        executable: /bin/bash

- name: start and enable netctl
  block:
    - name: enable eth0 in netctl
      raw: netctl enable eth0
      args:
        executable: /bin/bash
      register: net
    - name: stop dhcpcd
      raw: systemctl stop dhcpcd;
      args:
        executable: /bin/bash
      register: net2
    - name: disable dhcpcd
      raw: systemctl disable dhcpcd;
      args:
        executable: /bin/bash
      register: net3

But it fails to work if I use:

- name: setup static IP
  template:
    src: staticIP-netctl.j2
    dest: /etc/netctl/eth0
    owner: root
    group: root
    mode: 0644

- name: start and enable netctl
  block:
    - name: enable eth0 in netctl
      service:
        name: netctl
        state: started
        enabled: yes
    - name: stop and disable dhcpcd (dynamic IP addresses)
      service:
        name: dhcpcd
        state: stopped
        enabled: no

or this also fails

- name: setup static IP
  template:
    src: staticIP-netctl.j2
    dest: /etc/netctl/eth0
    owner: root
    group: root
    mode: 0644

- name: start and enable netctl
  block:
    - name: enable eth0 in netctl
      raw: netctl enable eth0
      args:
        executable: /bin/bash
    - name: stop and disable dhcpcd (dynamic IP addresses)
      raw: systemctl stop dhcpcd
      args:
        executable: /bin/bash
    - name: stop and disable dhcpcd 2 (dynamic IP addresses)
      raw: systemctl disable dhcpcd
      args:
        executable: /bin/bash

my staticIP-netctl.j2 file is:

Description='A basic static ethernet connection'
Interface=eth0
Connection=ethernet
IP=static
Address=('{{ host_ip_addr }}/24')
#Routes=('192.168.0.0/24 via 192.168.1.2')
Gateway='192.168.1.1'
DNS=('192.168.1.1')

and it's in the roles/role/templates folder; it's also being copied over correctly, as I have checked manually on each Pi.

Any ideas why this may be happening?

Where are these printers coming from in "Devices and Printers?"

Posted: 31 Oct 2021 08:42 PM PDT

EDIT: I think it is coming from HKEY_USERS\.DEFAULT\Printers\ConvertUserDevModesCount. I see all of the original and new printer connections listed in there, as well as a ton of repeating \\CSR|<ServerName>\{<long GUID>} entries. I found this article in a roundabout way, which led me to look in this area.


I have a set of printers which deploy to computers via GPO. Today, I tried to change that printer mapping. The new printer mappings show up on the workstation, but the old ones are still being displayed.

However, prior to login, the user profile does not exist on the computer. Nothing in C:\Users, nothing in Advanced System Settings. Yes, I have a lot of computers to test on. Even if I remove the GPO which deploys printers, the original printers continue to show up in the "Devices and Printers" window.

If I delete the user from AD, and re-add a new user with same username and password, the original/old printers no longer show up.

Additionally, if I use powershell's get-printer or wmic printer list brief these original/old printers do NOT show up. They also do NOT show up in the registry under HKCU\Printers\Connections but ALL of the proper / new printer mappings do. Yet, these old connections continue to show in the 'Devices and Printers' window. And, they continue to work properly.

These are hybrid Azure AD joined PCs. But we do not have AD Premium, and there is no Enterprise State Roaming configured. We are not using roaming profiles. We are not redirecting folders to any network shares. No other settings seem to roam or appear. Files saved are gone. This seems to affect all or several users on the same machines, including a 'guest' user whose profile is 'temporary' and deleted on every logoff.

HOW are these printers continuing to appear on computers where the user has no profile and no GPO or script is deploying them? Why do they show only in 'Devices and Printers' but not in wmic, PowerShell, or the registry? The user has logged on to this computer and others in the past within our organization.

Configuring SSSD to do SSH SSO using Active directory

Posted: 31 Oct 2021 06:12 PM PDT

I am currently thinking about a solution for SSHing via Kerberos using SSSD linked to an Active Directory, without joining the machine to the domain.

The main constraint is not joining the machine to the AD domain. I would like to know if you have already tried this solution and whether it is possible. I am not very familiar with using Kerberos for SSO services.

Currently I am working on CentOS 7. I have already set up an AD and configured SSSD to allow SSH logins with AD accounts. I would now like to use Kerberos tickets so that I can pivot from machine to machine over SSH with a single ticket, using the AD accounts.
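For context, the direction I am considering looks roughly like this (a sketch, untested; the realm and host names are placeholders): SSSD using LDAP for identities and Kerberos for authentication, with no machine account in AD:

# /etc/sssd/sssd.conf
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://dc1.example.com
ldap_search_base = dc=example,dc=com
krb5_server = dc1.example.com
krb5_realm = EXAMPLE.COM
cache_credentials = true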

Thank you in advance for your answers !

Unable to resolve host domain name

Posted: 31 Oct 2021 06:06 PM PDT

Recently, I noticed that from time to time my client is unable to resolve my domain name. I have a Lightsail instance with a static IP, a Lightsail DNS zone, and finally a Route53 domain name.

The only thing I did for now is add the Lightsail DNS zone's name servers to the Route53 registered domain's name server list (as described here). I currently don't have a hosted zone on Route53.

I have three questions. First, is the other way around more scalable? Meaning, should I have a hosted zone on Route53 pointing to the Lightsail static IP (like this)? Second, is there a domain name access quota that I'm not aware of? And finally, is there something flagrant that I'm doing wrong? (I'm a newbie in networking.)

Edit

Domain name: yimaru.services

Static IP address: 3.121.169.168
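A check that may help narrow this down (run from the client that intermittently fails):

dig +trace NS yimaru.services    # walk the delegation down from the root
dig +short NS yimaru.services    # the NS set from the resolver's point of view
dig +short A yimaru.services     # should return 3.121.169.168

If the registrar's NS records and the Lightsail DNS zone's name servers disagree, some resolvers will fail intermittently.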

How can I allow devices on two subnets from one ISP to communicate with each other?

Posted: 31 Oct 2021 04:31 PM PDT

First, some context:

I run a small business out of my home. My bedroom serves as the "office" area, and that's where I am most of the time. My ISP (Frontier) leads into their Arris modem in the living room, which is also running a LAN (10.0.0.x; the modem is located at 10.0.0.1). I have another router (TP-Link AX1500, 10.0.1.x) in my bedroom that's plugged into the Arris modem at 10.0.0.10. Basically, I want every device from both (subnets? hopefully I'm using that word correctly) to be able to communicate with any other device in the network as a whole. Currently, I can send a query from any device on the TP-Link router to any other device in the house (and get a response), but I can't send a query from a device on the Arris network to any specific device on the TP-Link network (unless I port-forward a specific device via the TP-Link router ahead of time and just ping the router itself).

My goal is to allow any device in the house to communicate with any other device in the house, as if they were all hooked up to the same router (I'd like to keep my local IP addresses better organized, so that's why I have them on separate subnets).


Here's a diagram, hopefully it conveys my network setup well enough. The (...) means there are more devices connected to its parent, but aren't necessarily relevant to the question.

[ISP]
  |
  \--- [ARRIS Modem/Router (10.0.0.1)] - PUBLIC NETWORK (LIVING ROOM)
        |
        |\-- [Unmanaged Network Switch]
        |     |
        |     |\-- [Linux Desktop (10.0.0.15)]
        |     |
        |     \--- (...gaming consoles...)
        |
        |\-- (...phones, laptops, friends' devices...)
        |
        \--- [TP-Link AX1500 (10.0.0.10 / 10.0.1.1)] - BUSINESS NETWORK (BEDROOM)
              |
              |\-- [Win10 PC (10.0.1.100)] - [Currently port-forwarded on 80
              |                               & 443 so I can have a website up]
              |
              \--- [Unmanaged Network Switch]
                    |
                    |\-- [Linux Desktop (10.0.1.10)]
                    |
                    \--- (...printer, hue bridge...)

Like I said, I'd like to have all of my business devices on a different subnet (10.0.1.x) than the rest of the clients in the house (10.0.0.x) while still having completely open communication between them. I'm okay with switching some routers around (or even getting another router or another network switch or something) if that's necessary.

Currently, every device on the 10.0.1.x network is able to initiate a connection to 10.0.0.x devices. For example, pinging 10.0.0.15 from 10.0.1.100 actually reaches the 10.0.0.15 client (and I verified this by having an HTTP server running on every device and using curl), but pinging 10.0.1.100 from 10.0.0.15 returns From [my public IP address] icmp_seq=1 Destination Net Unreachable. What's going on here?

What I discovered was that when I pinged 10.0.0.15 from 10.0.1.100, the request was coming from the TP-Link router itself (10.0.0.10), not the client on its own subnet (like 10.0.1.100).
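If I understand it correctly, that means the TP-Link is NATing its 10.0.1.x clients behind 10.0.0.10. Conceptually, the Arris would then need a static route back to the bedroom subnet, something like the following (Linux ip-route syntax purely for illustration; the Arris UI will differ), plus NAT disabled on the TP-Link:

ip route add 10.0.1.0/24 via 10.0.0.10    # reach 10.0.1.x through the TP-Link's WAN address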


Let me know if I should make a separate thread for this next question, but I would also really like to know why only the devices on 10.0.0.x can find each other via hostname; my devices used to be able to do that on the 10.0.1.x network, but they suddenly stopped being able to. I now have to use each device's IP address to communicate with it.

Thank you in advance, and my deepest apologies if this question exists elsewhere (it might be a duplicate of One ISP, two switches, two subnets, but I really can't tell); it's just such a specific situation that I couldn't tell what the issue narrows down to, and therefore didn't really know what to search for.

Nick W.

Measure traffic for an interface monthly

Posted: 31 Oct 2021 03:51 PM PDT

I am running a small server in a remote home, and I have a SIM card with very limited data. I would like a file that records, cumulatively, month by month (starting from the 27th of each month), the data consumption in MB for a specific interface. I have tried different tools such as sysstat (sar) and vnstat, but I have not been able to produce a file with the simple information of how many MB were used from, say, Feb 27th to Mar 26th (which is when the carrier restarts my data plan). Every month the file should be overwritten. I am running Debian 11.
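The closest sketch I have so far, assuming vnStat 2.x (the interface name wwan0 is a placeholder): vnstat.conf has a MonthRotate setting for the day the month starts, and a cron job can overwrite the report file each cycle:

# /etc/vnstat.conf: align vnStat's months with the billing cycle
MonthRotate 27

# /etc/cron.d/traffic-report: overwrite the file shortly after each cycle starts
5 0 27 * * root vnstat -i wwan0 -m > /var/log/monthly-traffic.txt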

Apache Redirect for HTTPS (Nextcloud) leads to redirect loop

Posted: 31 Oct 2021 03:49 PM PDT

How can I correctly configure Nextcloud and Apache, to have correct URL redirection?

I have configured Apache for redirection of HTTP to HTTPS, using a simple Redirect directive:

<VirtualHost *:80>
    ServerName "example.com"
    Redirect permanent "/" "https://example.com/"
</VirtualHost>  # *:80

<VirtualHost *:443>
    ServerName "example.com"
    ServerAdmin "webmaster@example.com"

    SSLEngine On
    SSLCertificateFile "/etc/ssl/certs/example.com/server.cert.fullchain.pem"
    SSLCertificateKeyFile "/etc/ssl/private/example.private-key.pem"

    Alias "/nextcloud" "/srv/nextcloud/html"
    DocumentRoot "/srv/nextcloud/html"

    <Directory "/srv/nextcloud/html">
        Require all granted
        Options +FollowSymlinks
        AllowOverride all
        # …
    </Directory>  # /srv/nextcloud/html

</VirtualHost>  # *:443

Nextcloud's configuration specifies that it should (via the automatically generated .htaccess file) rewrite URIs to drop the PHP module filename:

<?php
$CONFIG = array (
  // …
  'trusted_domains' => array (
    0 => 'example.com',
  ),
  'overwrite.cli.url' => 'https://example.com/nextcloud',
  'htaccess.RewriteBase' => '/nextcloud',
  // …
);

The server fails to redirect, instead getting into a redirect loop. With LogLevel debug I see these error messages:

[Mon Nov 01 06:42:46.246002 2021] [ssl:info] [pid 68035] [client 198.51.100.38:55158] AH01964: Connection to child 7 established (server example.com:443)
[Mon Nov 01 06:42:46.246850 2021] [ssl:debug] [pid 68035] ssl_engine_kernel.c(2393): [client 198.51.100.38:55158] AH02043: SSL virtual host for servername example.com found
[Mon Nov 01 06:42:46.247069 2021] [core:debug] [pid 68035] protocol.c(2428): [client 198.51.100.38:55158] AH03155: select protocol from , choices=h2,http/1.1 for server example.com
[Mon Nov 01 06:42:46.365492 2021] [ssl:debug] [pid 68035] ssl_engine_kernel.c(2252): [client 198.51.100.38:55158] AH02041: Protocol: TLSv1.3, Cipher: TLS_AES_128_GCM_SHA256 (128/128 bits)
[Mon Nov 01 06:42:46.365893 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(508): AH00831: socache_shmcb_store (0x01 -> subcache 1)
[Mon Nov 01 06:42:46.366041 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(745): AH00842: expiring 1 and reclaiming 0 removed socache entries
[Mon Nov 01 06:42:46.366168 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(765): AH00843: we now have 0 socache entries
[Mon Nov 01 06:42:46.366270 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(862): AH00847: insert happened at idx=0, data=(0:32)
[Mon Nov 01 06:42:46.366369 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(865): AH00848: finished insert, subcache: idx_pos/idx_used=0/1, data_pos/data_used=0/207
[Mon Nov 01 06:42:46.366466 2021] [socache_shmcb:debug] [pid 68035] mod_socache_shmcb.c(530): AH00834: leaving socache_shmcb_store successfully
[Mon Nov 01 06:42:46.370419 2021] [ssl:debug] [pid 68035] ssl_engine_kernel.c(415): [client 198.51.100.38:55158] AH02034: Initial (No.1) HTTPS request received for child 7 (server example.com:443)
[Mon Nov 01 06:42:46.371270 2021] [authz_core:debug] [pid 68035] mod_authz_core.c(815): [client 198.51.100.38:55158] AH01626: authorization result of Require all granted: granted
[Mon Nov 01 06:42:46.371449 2021] [authz_core:debug] [pid 68035] mod_authz_core.c(815): [client 198.51.100.38:55158] AH01626: authorization result of <RequireAny>: granted
[Mon Nov 01 06:42:46.371837 2021] [core:info] [pid 68035] [client 198.51.100.38:55158] AH00128: File does not exist: /srv/nextcloud/html/favicon.ico
[Mon Nov 01 06:42:46.372023 2021] [authz_core:debug] [pid 68035] mod_authz_core.c(815): [client 198.51.100.38:55158] AH01626: authorization result of Require all granted: granted
[Mon Nov 01 06:42:46.372108 2021] [authz_core:debug] [pid 68035] mod_authz_core.c(815): [client 198.51.100.38:55158] AH01626: authorization result of <RequireAny>: granted
[Mon Nov 01 06:42:46.373282 2021] [core:error] [pid 68035] [client 198.51.100.38:55158] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Mon Nov 01 06:42:46.373383 2021] [core:debug] [pid 68035] core.c(3947): [client 198.51.100.38:55158] AH00121: r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373461 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373535 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373608 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373680 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373754 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373826 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373898 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.373971 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/index.php
[Mon Nov 01 06:42:46.374044 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /nextcloud/
[Mon Nov 01 06:42:46.374116 2021] [core:debug] [pid 68035] core.c(3953): [client 198.51.100.38:55158] AH00122: redirected from r->uri = /favicon.ico
[Mon Nov 01 06:42:46.374214 2021] [headers:debug] [pid 68035] mod_headers.c(890): AH01503: headers: ap_headers_error_filter()

The intention is to use Nextcloud's configuration to rewrite its URLs nicely, and to use Apache Redirect to redirect HTTP requests to the equivalent HTTPS. What is wrong here, and how do I achieve this correctly?
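Two diagnostics that may help reproduce this (sketches, not fixes): confirm the module the .htaccess rewrites depend on is actually loaded, and follow the redirect chain without a browser cache:

apachectl -M | grep -i rewrite             # .htaccess RewriteRule needs mod_rewrite
curl -sIL https://example.com/nextcloud    # shows each hop of the redirect chain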

Prevent SlowLoris attack with ModSecurity (Apache)

Posted: 31 Oct 2021 03:40 PM PDT

I'm unable to stop a Slowloris attack, launched from a computer on the same network, using ModSecurity on my Apache (2.4) server.

I'm on Debian 11.

I added this to /etc/modsecurity/modsecurity.conf:

SecConnReadStateLimit 5

And set this to On: SecRuleEngine On

I'm using this to execute the attack: slowhttptest -H -c 1000 -i 1 -r 200 -x 24 -p 5 -t GET -u http://10.11.48.76:80

And yes I do: systemctl restart apache2
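For context, the stock Apache 2.4 defence against Slowloris-style attacks is usually mod_reqtimeout rather than ModSecurity; its standard configuration is a one-liner (values here are roughly the Debian defaults):

# /etc/apache2/mods-available/reqtimeout.conf (enable with: a2enmod reqtimeout)
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500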

My Domain keeps disconnecting from Droplet IP

Posted: 31 Oct 2021 03:43 PM PDT

I recently completed this tutorial on setting up multiple WordPress servers on nginx: https://www.youtube.com/watch?v=P7W4iYkFaOU&t=168s. I have 1 domain and 4 subdomains connected, each with their own server blocks. The issue I'm having is that the domain and subdomains seem to keep disconnecting from the IP address, going off and on every 10 minutes. I've tested it using just the IP address instead of the domain name, and it works. My CPU, memory, and bandwidth usage are definitely fine. Not sure what the issue is.
[screenshot: My Digital Ocean DNS setup]
[screenshot: My NameCheap Basic DNS setup]

Combining Network Connections for Additive Speed

Posted: 31 Oct 2021 06:44 PM PDT

Edit: I've removed the errors I was receiving while starting the bond by using the teamd utility. However, my goal of increasing total speed by combining the networks is still open. Skip down to EDIT2 below if interested. I may delete the text in between soon, because it is an artifact of using the 'interfaces' config and commands like iface that have been deprecated, at least in Ubuntu.

I narrowed down the errors in starting bond0 to some circular logic. I'm trying to use bond-mode balance-rr to add my tethered cell phone connection together with my other cellular modem (connected over ethernet) for increased speed, defaulting to the latter when the phone is not tethered. I'm using systemctl restart networking on Kubuntu 20.04 to trigger the changes in /etc/network/interfaces. (Speedify and Connectify do this type of connection bonding.)

Edit: https://www.ibm.com/docs/en/linux-on-systems?topic=recommendations-bonding-modes

Quora question maybe clarify the terms used for L2 load balancing as 'link aggregation': https://www.quora.com/How-is-load-balancing-achieved-with-layer-2-devices

"Link aggregation (which is interchangable with the term "etherchannel" which I will use from here on out) is load balancing on layer 2. It's less about optimization, and more about spreading the load as equally as possible across each individual link."

EDIT2:

It looks like 'network teaming' with teamd may work. It prevents the errors from the interfaces config file while still bonding the networks with different bonding modes, including load balancing; a sketch follows.
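A minimal sketch of what I mean, using teamd's round-robin runner (eth0 and usb0 stand in for the modem and tethered-phone devices; the address is a placeholder):

cat > /tmp/team0.conf <<'EOF'
{
    "device": "team0",
    "runner": { "name": "roundrobin" },
    "ports": { "eth0": {}, "usb0": {} }
}
EOF
sudo teamd -d -f /tmp/team0.conf       # -d: run as a daemon with this config
sudo ip addr add 192.168.1.50/24 dev team0
sudo ip link set team0 up

Note the caveat below, though: round-robin spreads packets across links, but a single TCP stream still won't see the sum of both bandwidths end-to-end.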

Load balancing multiple NICs on single machine presenting a virtual IP

There may be difficulty in combining networks for speed. Failover and load balancing switch between networks based on which is more available, but that doesn't combine them additively. A given process looks to a single IP address at a time to reassemble packet streams. I would need something that requests packets over two different networks and reassembles the streams, as in 'redundant routing'.

Some kind of VPN may be required for that, similar to what Speedify does. However, a local VPN would be more ideal. If they use physical devices to combine the networks, virtual devices might be able to simulate them.

https://networklessons.com/cisco/ccie-routing-switching/introduction-gateway-redundancy

Ansible is it possible to use variable in template src

Posted: 31 Oct 2021 06:00 PM PDT

In Ansible we are trying to access different templates based on a variable.

We have following template files like:

templates/
    app1.conf.j2
    app2.conf.j2
    app3.conf.j2
tasks/
    app.yml

In the tasks we need to copy the template file based on the app name. For example, we will set a variable named "instance_name" to either app1, app2, or app3.

Now, based on the variable, we need to copy the app file to /opt/{{ instance_name }}/conf.d/.

We created an Ansible task as follows, but it's not working.

- name: 'Copy {{ instance_name }} file to /opt/conf.d/ Directory'
  template:
    src: "{{ instance_name }}.conf.j2"
    dest: "/opt/{{ instance_name }}/conf.d/"
    owner: root
    group: root
    mode: 0644


When we hard-code "src" to app1.conf.j2, it works for app1.

This URL, https://docs.ansible.com/ansible/latest/modules/template_module.html#parameter-src, specifies that the value can be a relative or an absolute path.

Please let us know if this is possible with this method. We have around 20 apps; what's the best way to simplify the Ansible playbook so that we only specify the variable?
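For reference, a sketch of how the looped version might look, assuming a list variable (instances is a made-up name) and a full destination filename:

- name: 'Copy each app template to its conf.d directory'
  template:
    src: "{{ item }}.conf.j2"
    dest: "/opt/{{ item }}/conf.d/{{ item }}.conf"
    owner: root
    group: root
    mode: 0644
  loop: "{{ instances }}"

With that, adding a 21st app is just one more list entry rather than a new task.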

How to automap shared mailbox **without** granting Full Access in Office365?

Posted: 31 Oct 2021 04:12 PM PDT

I want to establish a shared mailbox for a project team. I want this mailbox to be auto-mapped into the team members' Outlook profiles. However, I do not want them to have Full Access, so I can still control access permissions on individual folders inside that mailbox - for instance, to hide all the superfluous default folders they won't need, but also to have different folder permissions for project leads and mere stakeholders.

For test purposes I already solved this on our on-premises Exchange Server: simply entering the DNs of the team members into the shared mailbox's msExchDelegateListLink attribute (via ADSIEdit) does the trick nicely, and so far I haven't discovered any downsides to that approach. However, as far as I can tell, there is no way to access that attribute (or any attributes, for that matter) in an Office365 environment... or is there?
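For completeness, that on-premises ADSIEdit step can also be scripted; a sketch with the RSAT ActiveDirectory module (ProjectBox and jdoe are placeholder names):

# Append a member's DN to the shared mailbox's msExchDelegateListLink
$dn = (Get-ADUser jdoe).DistinguishedName
Set-ADUser -Identity ProjectBox -Add @{ msExchDelegateListLink = $dn }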

I feel I must be missing something essential here: Why exactly is auto-mapping tied to Full Access in the first place? Is my use case really that outlandish? Are there other approaches for this that I simply haven't thought of?

Postfix - block by sender email domain ip

Posted: 31 Oct 2021 07:06 PM PDT

For some time now I have received a lot of spam emails. The emails are all different, but if I look up the domain of the email address, it always resolves to the same IP address.

So:

  • xyz@domain1.tld -> resolves to 80.249.161.131
  • ddfda@domain2.tld -> resolves to 80.249.161.131
  • etc.

In Postfix I can reject each email address, but in this case that is not helpful because the email address changes all the time.

The next problem I have is that each email is sent through a different mail server, so I cannot block by the sending server either.

What I would like to do is block an email by IP address: not that of the sending server, but the IP address that the domain part of the email address resolves to.

Any suggestions on how this is done in postfix?
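For what it's worth, Postfix does have a restriction in roughly this direction: check_sender_mx_access matches the IP addresses of the sender domain's MX hosts (not the domain's A record, so it may or may not cover this case). A sketch:

# main.cf
smtpd_sender_restrictions =
    check_sender_mx_access cidr:/etc/postfix/sender_mx_access

# /etc/postfix/sender_mx_access (cidr table)
80.249.161.131/32   REJECT known spam host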

How to redirect from one subfolder to a subsubfolder with htaccess

Posted: 31 Oct 2021 05:02 PM PDT

I have this folder structure:

/fonts
    /myfont.eot
    /myfont.svg
    /myfont.ttf
    /myfont.woff
    /myfont.woff2
/content
    /page1
        /files
            /logo.png
            /style.css
        /index.html
    /page2
        /files
            /logo.png
            /style.css
        /index.html
    /page3
        /files
            /logo.png
            /style.css
        /a
            /index.html
        /b
            /index.html
    ...

The URLs one would call look like this:

  • example.com/content/page1
  • example.com/content/page2
  • example.com/content/page3/a
  • example.com/content/page3/b

Now all I want to achieve, with an .htaccess file located in /page3, is that whoever visits example.com/content/page3 is properly redirected to example.com/content/page3/a (or example.com/content/page3/a/index.html; I don't mind whether the file name is in the URL or not).

I tried

DirectoryIndex /content/page3/a/index.html  

but in this case, when I open example.com/content/page3, all relative references in the /a/index.html file are broken because of the missing directory level in the URL. Furthermore, while calling example.com/content/page3/a works, example.com/content/page3/b gives 403 Forbidden.

I tried

Redirect 301 /content/page3 /content/page3/a  

but this obviously results in an endless redirect spiral to example.com/content/page3/a/a/a/a/a/a/...... until the server stops trying.

So I figured I need some RewriteCond and RewriteRule configuration. Unfortunately, I don't understand the syntax, and all the examples I looked at do it at the top level with more complex stuff like redirecting files and sub-folders, sometimes off to another domain, etc.

I tried this

RewriteEngine On

RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
RewriteCond %{REQUEST_URI} ^/content/page3/$
RewriteRule ^/content/page3/?$ /content/page3/a [L]

because I figured this would replace "/content/page3" with "/content/page3/a", but to no avail; it doesn't do anything.

I now went with using

DirectoryIndex /content/page3/a/index.html index.html  

and replaced the relative references in the document with absolute ones. This works.

But firstly, I would still prefer the references to remain relative, so the document doesn't break in case the page3 folder is ever renamed; and secondly, I'd rather have the /a subdirectory in the URL for clarity as to what is displayed.

How can I achieve this?
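For reference, one variant I haven't seen in the examples: an anchored RedirectMatch should avoid the loop that the plain Redirect causes, because the regex only matches the bare /content/page3 path and not /content/page3/a (a sketch, untested on this exact layout):

RedirectMatch 301 ^/content/page3/?$ /content/page3/a/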

No protocol handler was valid for the URL /url. If you are using a DSO version of mod_proxy

Posted: 31 Oct 2021 04:03 PM PDT

I'm trying to set up a load balancer using Apache 2.4.x on Windows. The error: No protocol handler was valid for the URL /path/. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.

I'm accessing Webswing with WebSocket code deployed on a Jetty server.

The same configuration works on Linux.

config file:

ProxyPass /path balancer://cluster/path/ timeout=600
ProxyPassReverse /path balancer://cluster/path/ timeout=600
ProxyRequests Off
ProxyTimeout 600

<Proxy "balancer://cluster">
    Require valid-user
    AuthName "ClosedProxy"
    AuthType Basic
    Order deny,allow
    Allow from all
    Satisfy Any

    BalancerMember ws://server1 route=1 timeout=600
    BalancerMember ws://server2 route=3 timeout=600
    ProxySet stickysession=ROUTEID lbmethod=byrequests
</Proxy>
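Since the error text itself points at missing proxy submodules, for ws:// BalancerMembers the Windows httpd.conf would need at least these (a sketch; module paths follow the standard Windows layout):

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so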

Powershell Exchange Delete old Phone Sync Devices

Posted: 31 Oct 2021 09:06 PM PDT

I'm trying to run a PowerShell script that will clean up any phones that haven't synced with the Exchange 2013 server in at least 110 days.

My code will pull the data and export it to CSV, but when I try to pipe in the Remove-MobileDevice command to delete the devices, the script fails to do so. Nothing I found on the Internet has been of much help so far; most examples use the outdated ActiveSyncDevice cmdlets.

Here's my code; I'm new to PowerShell and appreciate any help:

Get-MobileDevice -result unlimited | Get-MobileDeviceStatistics |
    where {$_.LastSuccessSync -le (Get-Date).AddDays("-110")} |
    select devicetype, deviceidentity, deviceos, deviceuseragent, identity |
    Export-csv C:\PhoneSync\Logs\Stale_Devices_110days_$((Get-Date).ToString('MM-dd-yyyy_hh-mm-ss')).csv |
    foreach (Remove-MobileDevice -Identity DeviceUserAgent -confirm:$false)
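For comparison, a sketch of the same cleanup split into steps. One likely culprit: Export-Csv does not pass objects down the pipeline, so the foreach after it never receives any input; capturing the stale devices in a variable first avoids that (property names follow the original; adjust as needed):

$stale = Get-MobileDevice -ResultSize Unlimited |
    Get-MobileDeviceStatistics |
    Where-Object { $_.LastSuccessSync -le (Get-Date).AddDays(-110) }

# Log first, since Export-Csv terminates the pipeline...
$stale | Select-Object devicetype, deviceidentity, deviceos, deviceuseragent, identity |
    Export-Csv "C:\PhoneSync\Logs\Stale_Devices_110days_$((Get-Date).ToString('MM-dd-yyyy_hh-mm-ss')).csv" -NoTypeInformation

# ...then remove each device by its own Identity.
$stale | ForEach-Object { Remove-MobileDevice -Identity $_.Identity -Confirm:$false }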

Turn on Gzip for combined JS or CSS files without file extension

Posted: 31 Oct 2021 04:03 PM PDT

I'm trying to configure gzip on my nginx server. It works for files with a file extension.

To make a decision what kind of file is served over the network, Nginx does not analyze the file contents ... Instead, it just looks up the file extension to determine its MIME type

So when I have a combined CSS file without a file extension, nginx doesn't know it needs to be gzipped and serves it plain.

Is there a way to let nginx know that everything served from a specified location always needs to be gzipped, with or without a file extension?
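One direction that might work (a sketch; the location path and MIME type are placeholders): give the extensionless location a default MIME type, since nginx decides gzip eligibility by matching the response type against gzip_types:

location /combined/ {
    default_type text/css;      # used when no file extension maps to a type
    gzip on;
    gzip_types text/css application/javascript;
}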

Percona XtraDB Cluster 5.6 does not start

Posted: 31 Oct 2021 08:06 PM PDT

Good day, everyone. I want to run Percona XtraDB Cluster on Ubuntu 14.04 for test purposes. As a basis I took these two articles:

  1. https://habrahabr.ru/post/152969/
  2. https://www.percona.com/doc/percona-xtradb-cluster/5.6/manual/bootstrap.html

I got as far as the final step of the first article, which says: "And finally, we restart the daemon."

But the daemon does not start:

$ sudo /etc/init.d/mysql start
 * Starting MySQL (Percona XtraDB Cluster) database server mysqld
 * The server quit without updating PID file (/var/lib/mysql/vagrant-ubuntu-trusty-64.pid).
   ...fail!

The logs show several errors. Error one:

[ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.

Sure, I ran mysql_upgrade, but it does not work:

$ sudo mysql_upgrade
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
FATAL ERROR: Upgrade failed

Error two:

160502 14:56:26 [ERROR] Plugin 'InnoDB' init function returned error.
160502 14:56:26 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160502 14:56:26 [ERROR] Unknown/unsupported storage engine: InnoDB
160502 14:56:26 [ERROR] Aborting

Everyone advises removing everything in /var/lib/mysql, after which it supposedly starts. But I deleted it all and nothing worked.

My my.cnf file:

[mysqld_safe]
# wsrep_urls=gcomm://192.168.33.101:3400,gcomm://192.168.33.102:3400,gcomm://
#wsrep_urls=gcomm://192.168.33.101:3400,gcomm://

[mysqld]
innodb_log_file_size=256M
wsrep_cluster_address=gcomm://192.168.33.101
port=3306
socket=/var/run/mysqld/mysqld.sock
datadir=/var/lib/mysql
basedir=/usr
user=mysql
log_error=/var/log/mysql.err
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_sst_receive_address=192.168.33.101:3500
wsrep_node_incoming_address=192.168.33.101
wsrep_slave_threads=2
wsrep_cluster_name=cluster0
wsrep_provider_options="gmcast.listen_addr=tcp://192.168.33.101:3400;"
wsrep_sst_method=xtrabackup
wsrep_sst_auth=backup:password
wsrep_node_name=node0
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=5000M
innodb_log_file_size=256M
innodb_log_buffer_size=4M

[client]
port=3306
socket=/var/run/mysqld/mysqld.sock

So, the question: how do I get it to start? If anyone has a working configuration, please share.
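One step worth double-checking, straight from the second link above: the first node of a new cluster must be bootstrapped rather than started normally, because wsrep_cluster_address points at a cluster that does not exist yet:

sudo /etc/init.d/mysql bootstrap-pxc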

redirect traffic from 127.0.0.1:5003 to external interface

Posted: 31 Oct 2021 10:01 PM PDT

I have an application that exposes web services on the loopback address 127.0.0.1:5003, so they are only available to localhost. Is it possible to redirect traffic from there to the external interface so I can call the web services from other PCs on the network? I'm pretty sure this can be done by playing with iptables on Linux, but I'm using Windows 7.
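From what I can tell, the Windows counterpart of that iptables trick is netsh's built-in port proxy; a sketch (run from an elevated prompt; 0.0.0.0 listens on all interfaces):

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=5003 connectaddress=127.0.0.1 connectport=5003
netsh interface portproxy show all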

Thanks in advance!

pgpool2 parallel mode: Non-superusers must provide a password in the connection string

Posted: 31 Oct 2021 05:02 PM PDT

I have two AWS RDS postgres nodes backing a parallel mode pgpool setup on EC2. After using pgbench to populate test tables, I get odd behavior from test queries. Any query that uses a function produces the error mentioned in the subject line, while other queries work as expected. Three examples showing success, expected failure, and unexpected failure:

Success -- Yields the expected record set:

psql -c "SELECT aid FROM pgbench_accounts" "host=localhost port=9999 user=pgpool password=pass dbname=bench_parallel"  # Giant record set is returned here.  

Since the backing nodes are on RDS, md5 authentication is required. Authentication appears to be working fine in the case of non-function queries, as can be seen by replacing the correct password above with an incorrect one.

Expected authentication failure:

psql -c "SELECT aid FROM pgbench_accounts" "host=localhost port=9999 user=pgpool password=notmypass dbname=bench_parallel"  psql: FATAL:  password authentication failed for user "pgpool"  

Here's the part that has me baffled: if I put a function like min() or count() into the query, I get authentication problems:

psql -c "SELECT count(aid) FROM pgbench_accounts" "host=localhost port=9999 user=pgpool password=pass dbname=bench_parallel"  ERROR:  password is required  DETAIL:  Non-superusers must provide a password in the connection string.  

As can be seen from this last query, the password is supplied in the connection string (to the frontend, anyway), and it is the correct password, as shown in the first query.

Why would my first query work fine with no auth problems, but the third one fail? Have I overlooked a setting somewhere?

Edit 2014-10-23: Adding more information.

I added superuser privileges to user pgpool on the (frontend) system database and no longer get Non-superusers must provide a password in the connection string as the error. Now I get:

ERROR:  could not establish connection
DETAIL:  fe_sendauth: no password supplied

Turning on debugging for pgpool and looking in the log, I see the query being rewritten as the following, which, in the call to dblink, does not contain the password specified in the original connection string:

2014-10-23 19:59:10 DEBUG: pid 1643: OneNode_do_command: Query:
SELECT
    sum(pool_g$0) AS count
FROM
    dblink('host=ip-10-1-2-17 dbname=bench_parallel port=9999 user=pgpool',
           'SELECT pool_parallel("SELECT count(aid) FROM pgbench_accounts")',
           false) AS pool_t$0g (pool_g$0 bigint)
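Since that rewritten dblink connection string carries no password, one workaround I am considering (a sketch, unverified): a .pgpass file for the OS user the system database server runs as, so libpq can supply the password itself:

# ~/.pgpass of the user running the frontend/system database
# format: hostname:port:database:username:password
echo 'ip-10-1-2-17:9999:bench_parallel:pgpool:pass' >> ~/.pgpass
chmod 600 ~/.pgpass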

Can windows credentials be stored for 'All Users'?

Posted: 31 Oct 2021 09:06 PM PDT

I am looking for a way to store Windows credentials for 'All Users', as opposed to individually named users, in Win7.

The issue: we have a company server that is being accessed by multiple users. Each user logs on to the server with their unique credentials. While working on the server, each user needs to access paid-for services via a state (as in ND) web site. When they click the web site link for these services, they are presented with a Windows Security challenge. All users enter a common set of credentials (same username & password) for access to the state server. A user only has to enter the state credentials once, and they are good for the rest of the day, even as they log off and back on to our company server.

The kicker is that all individual user profiles are auto-deleted every night for business reasons. The users are wondering if the state credentials can be stored so that, no matter which user logs on to the company server, the state credentials will always be available when they try to access the state's paid-for services, without having to type them in every day.
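One idea (a sketch; the server and account names are placeholders): stored Windows credentials are per-user, but since the profiles are recreated at each logon anyway, a logon script assigned via GPO could recreate the credential every time:

rem logon script, e.g. assigned via GPO
cmdkey /add:state.example.gov /user:SHAREDUSER /pass:SharedPassword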

How to restore Ubuntu server on a VMWare image after disk failure?

Posted: 31 Oct 2021 08:06 PM PDT

After a disk failure on a VMware GSX server I was able to start the RAID with one disk and copy the VMware image to my ESXi server. After repairing the image with

vmkfstools -x repair /vmfs/volumes/source/vmname/vmname.vmdk  

and converting it to ESXi with

vmkfstools -i /vmfs/volumes/source/vmname/vmname.vmdk /vmfs/volumes/dest/vmname/vmname.vmdk -d thin  

I am not able to boot the image and just get

GRUB Loading stage1.5.

GRUB loading, please wait...
_

and the cursor does not even blink.

What are my options now? Is it possible to recover somehow with a rescue CD? What are the steps?

UPDATE:

I followed the advice to create a new Ubuntu server and add the VMware image as a new disk. However, I get the following:

mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

I tried to restore the superblock, but had no luck with the following commands.

sudo mke2fs -n /dev/sdb  

The above printed several numbers (as described in http://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/).

e2fsck -b 20480000 /dev/sdb  

I just keep getting "The superblock could not be read...". Do I have any chance of getting the data on this ext3 file system back?
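One detail that may matter (a guess based on the commands above): they all target the whole disk /dev/sdb, while an ext3 filesystem normally lives on a partition such as /dev/sdb1. Worth checking before giving up:

sudo fdisk -l /dev/sdb            # list the partitions on the attached disk
sudo mke2fs -n /dev/sdb1          # -n: print backup superblock locations, write nothing
sudo e2fsck -b 32768 /dev/sdb1    # retry e2fsck against the partition, not the disk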

MS SQL 2008 - Can I use Windows Authentication to connect from a Mac

Posted: 31 Oct 2021 06:00 PM PDT

I have been using Navicat on Mac (Snow Leopard) to connect to MS SQL 2005 via "Basic Auth", and all is good. However, the DB is now being migrated to MS SQL 2008, and try as I might, I can't get on via Windows Auth. I get the message...

[FreeTDS][SQL Server]Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. [FreeTDS][SQL Server]Unable to connect to data source
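In case it points anyone in the right direction: my understanding is that newer SQL Server setups often require NTLMv2, which FreeTDS does not use by default, and freetds.conf has a 'use ntlmv2' switch for that. A sketch (the host name and TDS version are guesses):

[mssql2008]
    host = sqlserver.example.com
    port = 1433
    tds version = 7.2
    use ntlmv2 = yes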

Any ideas would be very gratefully accepted. Many thanks.

ErrorCode<ERRPS013>:SubStatus<ES0001>:Operation was aborted because user selected not to enable Cache with secondaries

Posted: 31 Oct 2021 10:01 PM PDT

I get this error when running Start/Stop/Restart-CacheCluster commands in Caching Administration Windows PowerShell console:

ErrorCode<ERRPS013>:SubStatus<ES0001>:Operation was aborted because user selected not to enable Cache with secondaries.

What am I missing here? Microsoft's help does not list this error code here. I'm running v1.1 of AppFabric on a Windows 7 x64 machine.

EDIT: I have a single host, but am running it as a cache cluster. Also, this setup used to work a couple of days ago, but unfortunately I can't tell what actions led to it breaking.

how to find out the valid store names for certutil

Posted: 31 Oct 2021 05:34 PM PDT

I'm trying to find a way to script installing a certificate.

Going "right-click->install certificate" works, and shows the certificate under 'subordinate certification authorities' in IE's certificate view

I found the certutil.exe command:

certutil.exe -addstore -enterprise <storename>  

My question is: how do you list/find out the valid store names?
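For reference, certutil can enumerate the store names itself; a sketch ('CA' is, to my understanding, the system name behind the Intermediate/Subordinate Certification Authorities view):

rem List the valid store names (add -enterprise for the enterprise stores)
certutil -enumstore
certutil -enumstore -enterprise
rem Example: add a cert to the intermediate/subordinate CA store
certutil -addstore -enterprise CA mycert.cer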

Why is the chroot_local_user of vsftpd insecure?

Posted: 31 Oct 2021 06:15 PM PDT

I'm setting up vsftpd on my VPS, and I don't want users to be allowed to leave their FTP home directory. I'm using local users, not anonymous, so I added:

chroot_local_user=YES

I've read in a lot of forum posts that this is insecure.

  1. Why is this insecure?
  2. If this is insecure because these users can also reach my VPS over SSH, then I could just lock them out of sshd, right?
  3. Is there another option for achieving this behaviour in vsftpd? (I don't want to remove read permissions for "world" on all folders/files on my system.) See the sketch below.
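For anyone with the same goal, here is the direction I am experimenting with (a sketch, assuming vsftpd 3.x; paths are placeholders): keep the chroot root non-writable and give each user a writable subdirectory, instead of relaxing things with allow_writeable_chroot:

# /etc/vsftpd.conf
# Chroot all local users; the chroot root (/home/<user>/ftp) stays root-owned
# and non-writable, with a writable directory created below it.
chroot_local_user=YES
user_sub_token=$USER
local_root=/home/$USER/ftp
allow_writeable_chroot=NO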

Using Gentoo's `ebegin`, `eend` etc under Ubuntu

Posted: 31 Oct 2021 07:06 PM PDT

We're quite fond of the style of the ebegin, eend, eerror, eindent, etc. commands used by Portage and other tools on Gentoo. The green-yellow-red bullets and standard layout make for very quick spotting of errors in what would otherwise be very grey command-line output.

#!/bin/sh
source /etc/init.d/functions.sh
ebegin "Copying data"
rsync ....
eend $?

Producing output similar to:

 * Copying data...                                                       [ OK ]  

As a result we're using these commands in some of our common shell scripts, which is a problem for the people using Ubuntu and other linuxes. (linuces? linuxen? linucae? other distros)

On Gentoo these functions are provided by OpenRC and imported via the functions.sh file (whose exact position seems to vary slightly). But is there a simple way of getting these commands on Ubuntu?

In theory we could replace them all with dull echos, but we'd rather not?
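A sketch of the fallback we are considering (assuming OpenRC's functions.sh sits at one of its two usual paths; the stand-ins only loosely mimic the real layout):

#!/bin/sh
# Use the real OpenRC helpers when present, otherwise define minimal stand-ins.
if [ -r /lib/gentoo/functions.sh ]; then
    . /lib/gentoo/functions.sh
elif [ -r /etc/init.d/functions.sh ]; then
    . /etc/init.d/functions.sh
else
    ebegin() { printf ' * %s ...\n' "$*"; }
    eend() {
        status=${1:-0}
        if [ "$status" -eq 0 ]; then
            printf '%72s\n' '[ ok ]'
        else
            printf '%72s\n' '[ !! ]'
        fi
        return "$status"
    }
fi

ebegin "Copying data"
rsync ....
eend $?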
