Friday, December 31, 2021

Recent Questions - Server Fault

"aureport -x --summary" shows -> /usr/sbin/sshd;61b30d72 (deleted)

Posted: 31 Dec 2021 05:39 AM PST

On one of the machines running CentOS, i.e.

cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

I found something strange in the output of the command aureport -x --summary:

aureport -x --summary

Executable Summary Report
=================================
total  file
=================================
19328  /usr/bin/rpm
11802  /usr/sbin/crond
7713  /usr/sbin/sshd
4201  /usr/bin/grep
1564  /usr/libexec/postfix/pickup
1031  /usr/sbin/libvirtd
891  /usr/sbin/logrotate
866  /usr/sbin/unix_chkpwd
785  /usr/lib/systemd/systemd-logind
704  /usr/bin/ps
541  /usr/bin/su
302  /usr/bin/bash
295  /usr/sbin/xtables-multi
294  /usr/lib/systemd/systemd
222  /usr/bin/sudo
171  /usr/bin/id
135  /usr/bin/systemd-tmpfiles
66  /usr/bin/python2.7
48  /usr/bin/date
46  /usr/sbin/brctl
41  /usr/bin/ls
32  /usr/bin/ssh
31  /usr/bin/diff
30  /usr/sbin/sendmail.postfix
29  /usr/sbin/anacron
27  /usr/lib/polkit-1/polkitd
27  /usr/bin/pkla-check-authorization
24  /usr/libexec/postfix/cleanup
24  /usr/libexec/postfix/trivial-rewrite
24  /usr/libexec/postfix/local
20  /usr/sbin/virtlogd
18  /usr/sbin/postdrop
15  /usr/sbin/ebtables-restore
10  /usr/bin/kmod
6  /usr/bin/vim
6  /usr/libexec/postfix/master
5  /usr/sbin/sshd;61b30d72 (deleted)
4  /usr/bin/ssh-keygen
3  /usr/sbin/postfix
3  /usr/sbin/postlog
3  /usr/lib/systemd/systemd-update-utmp
3  /usr/sbin/autrace
2  /usr/bin/cpio
1  /usr/bin/getent
1  /usr/bin/chown
1  /usr/sbin/ip

What does "61b30d72 (deleted)" mean?

rkhunter does not show any warning or suspect files, i.e.

rkhunter --update --propupd
[ Rootkit Hunter version 1.4.6 ]

and then

rkhunter -c -sk  

!!!all green!!!

What does 61b30d72 mean?
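For context, a "(deleted)" marker appears when events reference an executable whose file was removed or replaced on disk (e.g. by a package update of openssh-server) while a process was still running it; `rpm -q --last openssh-server` can show whether an update happened around that time. A minimal sketch of the underlying effect, using a copied sleep binary as a stand-in (all paths here are illustrative, not from the audit log):

```shell
# Copy a long-running binary, start it, then delete the copy:
cp /bin/sleep /tmp/mysleep
/tmp/mysleep 60 &
pid=$!
rm /tmp/mysleep

# The process keeps running from the unlinked inode; the kernel
# marks its /proc exe symlink with "(deleted)":
ls -l /proc/$pid/exe

kill $pid
```

The hex suffix after the ";" is presumably an identifier the tooling attached to the unlinked file; the demonstration above only shows why a running-but-deleted /usr/sbin/sshd is reported at all.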

CentOS 7 , OpenVPN Server Radius Plugin

Posted: 31 Dec 2021 04:28 AM PST

On my new OpenVPN server install, the RADIUS plugin cannot read client status. It worked on the previous installation; now everything is the same, but it is not working. Please help.

The server log shows this:

RADIUS-PLUGIN: BACKGROUND ACCT: No accounting data was found for user01

use "rewrite" and "try_files" together [Nginx]

Posted: 31 Dec 2021 04:35 AM PST

I removed the ".php" suffix at the end of the PHP files on the Nginx server with the following code, but this time I cannot send some data to the server.

try_files $uri/ $uri.html $uri.php$is_args$query_string;  

Some links on the site are sent with Ajax, and the ".php" extension is not present at the end of these links, e.g. https://panel.example.com/app/controller/ajax/collect

For example, when I try to access the "/collect" file via Ajax or directly, I get the error "File not found", because I do a "rewrite" with the code below to provide clean URLs.

rewrite ^/([^/]+)/([^/]+)?$ /index.php?cmd=$1&scd=$2 last;
rewrite ^/([^/]+)/?$ /index.php?cmd=$1 last;

Sample link: https://panel.example.com/[details|cat|profile]/[subPages(productID, username..)]

Each of the snippets above is correct and works on its own, but they do not work together. How can I run both at the same time?
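One hedged way to let the two mechanisms coexist (a sketch, not a verified config; the root and the PHP-FPM socket path are assumptions) is to run the clean-URL rewrites at server level first, then let try_files handle any request the rewrites did not match:

```nginx
server {
    root /var/www/panel;    # assumption -- adjust to the real docroot

    # Clean-URL rewrites from the question; 'last' re-runs location
    # matching with the rewritten URI, so these land in the PHP block.
    rewrite ^/([^/]+)/([^/]+)?$ /index.php?cmd=$1&scd=$2 last;
    rewrite ^/([^/]+)/?$ /index.php?cmd=$1 last;

    location / {
        # Suffix-less lookup for everything the rewrites skipped
        # (e.g. /app/controller/ajax/collect -> collect.php).
        try_files $uri $uri/ $uri.html $uri.php$is_args$query_string;
    }

    location ~ \.php$ {
        # Standard PHP-FPM handoff; socket path is an assumption.
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/www.sock;
    }
}
```

Note the trade-off: with the rewrites first, any two-segment path is always captured as cmd/scd and never reaches try_files, so the URL schemes must not overlap.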

Bind not working, what is wrong in my configuration?

Posted: 31 Dec 2021 02:57 AM PST

Following my previous question, "Dnsmasq server does not work when configured as the primary DNS in my router", where I unsuccessfully set up a DNS server on a Vagrant virtual machine, I decided to switch to BIND on an actual physical macOS machine to make sure everything works.

I have installed BIND on macOS and have the following named.conf:

logging {
    category default {
        _default_log;
    };
    channel _default_log {
        file "/usr/local/var/log/named/named.log" versions 10 size 1m;
        severity info;
        print-time yes;
    };
};

options {
    listen-on port 53 { 127.0.0.1; 192.168.2.11; };
    directory "/usr/local/var/named";
    allow-query { localhost; 192.168.2.11/24; };
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;

    forwarders {
        192.168.1.1;
    ;
};

zone "mysite.com" {
    type master;
    file "mysite.com.zone";
};

And the content of "mysite.com.zone" is as follows:

$ORIGIN mysite.com.
$TTL 86400
@        IN    SOA    ns.mysite.com admin.mysite.com ( 2021123100 28800 3600000 86400 )
@        IN    NS     ns.mysite.com
@        IN    A      192.168.2.11
test1    IN    A      192.168.2.150
test2    IN    A      192.168.2.200

However, after sudo brew services restart bind, I cannot access test1.mysite.com. I can still access 192.168.2.150 directly, though.

Some clarification: the macOS machine has the IP address 192.168.2.11 on my local network. I have also added 127.0.0.1 on the DNS page of the macOS Network settings.

I am very new to DNS and am stuck at this point. Any help would be appreciated.
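A hedged way to narrow this down is to validate the files offline with BIND's bundled checkers and query the server directly, bypassing the system resolver (the named.conf path below is an assumption; adjust to wherever Homebrew put it):

```shell
# Validate named.conf syntax (prints nothing when clean):
named-checkconf /usr/local/etc/named.conf

# Validate the zone file against its zone name:
named-checkzone mysite.com /usr/local/var/named/mysite.com.zone

# Query the server directly on both addresses it listens on:
dig @127.0.0.1 test1.mysite.com A +short
dig @192.168.2.11 test1.mysite.com A +short
```

If the checkers pass but dig times out, the problem is reachability rather than zone data; if named-checkzone complains, the zone file itself is at fault.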

Netplan - Error in network definition: Updated definition changes device type

Posted: 31 Dec 2021 02:52 AM PST

I'm trying to set a static IP on Ubuntu.

ip a:

1: lo <LOOPBACK, ...
   ...
2: epn2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu ...
   link/ether 5d:43:... brd ff:ff...
3: wkp1s0: <BROADCAST,MULTICAST,...
   link/ether d4:a1:...
   inet 192.168.1.15/24 brd 192.168.1.255 ...
      valid_lft ...
   inet6 ...
      valid_lft ...

So my adapter is the 3rd one, wkp1s0.

But my default netplan YAML file is just the basic file, with epn2s0 as dhcp: true.

When I change the YAML file to give wkp1s0 a static IP and then run netplan try, it gives:

Error in network definition: Updated definition 'wkp1s0' changes device type
wkp1s0
^

What am I doing wrong?
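This error typically means wkp1s0 is already defined somewhere with a different device type, either in another file under /etc/netplan or under the wrong section: a name starting with "w" usually denotes a wireless interface, which belongs under wifis rather than ethernets. A hedged sketch of a static-IP definition (SSID, password, gateway, and renderer are all placeholders/assumptions):

```yaml
# /etc/netplan/01-static.yaml -- sketch only; check 'ls /etc/netplan'
# for other files that also define wkp1s0 and remove the duplicate.
network:
  version: 2
  renderer: networkd
  wifis:
    wkp1s0:
      dhcp4: false
      addresses:
        - 192.168.1.15/24
      routes:                 # older netplan uses 'gateway4: 192.168.1.1'
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      access-points:
        "your-ssid":
          password: "your-password"
```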

Google Compute Engine Debian VM, firewall rules only apply to IPV6

Posted: 31 Dec 2021 02:18 AM PST

I am using a Debian VM on Google Cloud as an API provider. I access the API from Android on TCP port 30300 and it works OK. I also access the API from a C++ app running on microcontrollers, and it also works OK. Then I decided to also provide access to the API from PHP, and that is not working.

From an external server running Apache, the API behaves as if it is not accessible; the API monitor does not show any access at all.

Then I moved the PHP script to the same server running the API, changed the target from its web address to http://127.0.0.1:30300/alprbr, and it works OK.

Then I checked the Compute Engine firewall rules closely and found that the rule only allows IPv6; I found no way to allow IPv4 traffic on the public IP address.

I don't really know if this is the cause of the problem, but it seems like something to sort out before trying anything else.

Google Cloud Firewall Rule:

alprbr
Input
Apply to all
IP range: 0.0.0.0/0
tcp:30300
Allow
1000
default

netstat -an | grep "LISTEN" returns:

tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:1880            0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 :::30300                :::*                    LISTEN

What am I doing wrong? Assistance welcome.
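One thing worth checking before blaming the firewall rule: on Linux, a tcp6 wildcard listener like :::30300 normally accepts IPv4 connections too (as v4-mapped addresses) unless dual-stack binding is disabled. A hedged set of checks (EXTERNAL_IP is a placeholder, and the rule name is taken from the question):

```shell
# On the VM: 0 means the tcp6 socket also accepts IPv4 (dual-stack on):
sysctl net.ipv6.bindv6only

# From any machine with the Cloud SDK: inspect the rule that should
# cover the port, including its direction and source ranges:
gcloud compute firewall-rules describe alprbr

# Test reachability from outside the VM:
curl -v http://EXTERNAL_IP:30300/alprbr
```

If curl from outside times out while the local 127.0.0.1 test works, the block is between the public IP and the VM (firewall rule, network tag, or the API binding), not in the tcp6 listener itself.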

TCPDUMP Order of Operations: exclude and include

Posted: 31 Dec 2021 05:03 AM PST

Trying to look at multicast traffic, I created a filter to monitor the range, then slowly added statements to exclude things that were not relevant, but didn't get the expected results. Do you do the opposite when writing: put narrow excluding statements first, then tack on the large overarching statements at the end?

Failed attempt:

tcpdump -i any -s0 net 224.0.0.0/4 && not net 239.254.127.63/32 && not net 233.89.188.1/32 && not arp
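One likely issue with the attempt above is that the unquoted `&&` is consumed by the shell (splitting the line into separate commands) and never reaches tcpdump. A hedged sketch of the same filter, quoted as a single pcap expression, with the broad match first and the exclusions narrowing it:

```shell
# Quote the whole filter so the shell passes it to tcpdump intact;
# pcap 'and not' terms are applied left to right after the broad match.
tcpdump -i any -s0 'net 224.0.0.0/4 and not net 239.254.127.63/32 and not net 233.89.188.1/32 and not arp'
```

With a quoted expression the order shown (broad include, then excludes) behaves as intended; `tcpdump -d '<filter>'` can be used to inspect the compiled filter without capturing.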

Mounting EC2 directory with existing data to Fargate container using EFS

Posted: 31 Dec 2021 01:18 AM PST

I have an EC2 instance with a huge directory (e.g. /large-dir) that containers need to access. Both the instance and the containers share the same network and security group. I'm able to create and mount an EFS volume in the container, as well as add/remove files from the EFS after it is mounted.

On the instance there are processes that constantly write to /large-dir, so I cannot rename or move this directory.

At the moment I cannot mount the EFS on the EC2 instance, because that effectively hides the directory's data behind the (empty) EFS mount.

This would be easy if I were setting up from scratch (initial empty dir on EC2 -> mount EFS on EC2 -> start processes that write to the dir/EFS -> containers spin up randomly, mount the EFS, and have access to the data on EC2).

Is there a way to sync the /large-dir on EC2 with EFS so any modifications are automatically available to the EFS and therefore available to containers that mount it?
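One hedged migration pattern (a sketch; the filesystem ID fs-12345678 is a placeholder and amazon-efs-utils is assumed to be installed): mount the EFS at a staging path so /large-dir is untouched, copy incrementally while the writers keep running, then cut over with a bind mount:

```shell
# Mount EFS somewhere other than /large-dir:
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs

# Bulk copy while writers keep running; rerun until the delta is small:
sudo rsync -a /large-dir/ /mnt/efs/

# Briefly stop the writers, do one final sync, then serve the old
# path from EFS via a bind mount so nothing needs reconfiguring:
sudo rsync -a --delete /large-dir/ /mnt/efs/
sudo mount --bind /mnt/efs /large-dir
```

After the cutover, writes land on EFS directly, so containers mounting the same filesystem see modifications without any extra syncing step.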

Dnsmasq server does not work when configured as the primary DNS in my router

Posted: 31 Dec 2021 01:53 AM PST

I have a LAN with IP range 192.168.2.* and the following configuration:

  1. There is a router of address 192.168.2.1

  2. There is a MacOS machine of address 192.168.2.11

  3. There is a vagrant (virtual box) CentOS 7 machine of address 192.168.2.150, with httpd installed and accessible via http://192.168.2.150/index.html from the MacOS machine using Google Chrome

  4. There is another vagrant (virtual box) CentOS 7 machine of address 192.168.2.250, with dnsmasq installed and configured with host file line 192.168.2.150 test.mysite.com

Here is what worked:

i. In the MacOS machine's system preference, open "Network" and "Advanced", choose "DNS" tab and add the DNS server 192.168.2.250

ii. Access http://test.mysite.com/index.html from Google Chrome in MacOS, the web page will appear correctly

Here is what did not work:

i. Remove all entries from MacOS's system preference

ii. Open the router's admin page at 192.168.2.1 and set the Primary DNS address to be 192.168.2.250

iii. Access http://test.mysite.com/index.html from Google Chrome in MacOS: the web page loads for a minute and then shows an error that the site cannot be reached (DNS_PROBE_FINISHED_BAD_CONFIG); nslookup on the MacOS machine shows ";; connection timed out; no servers could be reached"

sudo systemctl status dnsmasq shows localhost.localdomain dnsmasq[7102]: read /etc/hosts - 4 addresses, so the hosts file is being read on the dnsmasq side
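Since dnsmasq reads its hosts file fine, a hedged next step is to check whether queries from other hosts reach it at all; on CentOS 7, firewalld blocking port 53 is a common culprit (the commands below assume firewalld is in use):

```shell
# From the MacOS machine (or any LAN host), query dnsmasq directly:
dig @192.168.2.250 test.mysite.com +short

# On the dnsmasq VM: confirm it listens beyond localhost, and check
# whether the firewall permits DNS:
sudo ss -lnu 'sport = :53'
sudo firewall-cmd --list-all

# If "dns" is not among the allowed services, open it:
sudo firewall-cmd --permanent --add-service=dns
sudo firewall-cmd --reload
```

If the direct dig works but the router-as-primary-DNS setup still fails, the problem is on the router's forwarding side rather than in dnsmasq.conf.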

I think there's something wrong in my dnsmasq.conf, but I could not figure out what. The current complete conf is shown below:

# Configuration file for dnsmasq.
#
# Format is one option per line, legal options are the same
# as the long options legal on the command line. See
# "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details.

# Listen on this specific port instead of the standard DNS port
# (53). Setting this to zero completely disables DNS function,
# leaving only DHCP and/or TFTP.
#port=5353

# The following two options make you a better netizen, since they
# tell dnsmasq to filter out queries which the public DNS cannot
# answer, and which load the servers (especially the root servers)
# unnecessarily. If you have a dial-on-demand link they also stop
# these requests from bringing up the link unnecessarily.

# Never forward plain names (without a dot or domain part)
domain-needed
# Never forward addresses in the non-routed address spaces.
bogus-priv

# Uncomment these to enable DNSSEC validation and caching:
# (Requires dnsmasq to be built with DNSSEC option.)
#conf-file=%%PREFIX%%/share/dnsmasq/trust-anchors.conf
#dnssec

# Replies which are not DNSSEC signed may be legitimate, because the domain
# is unsigned, or may be forgeries. Setting this option tells dnsmasq to
# check that an unsigned reply is OK, by finding a secure proof that a DS
# record somewhere between the root and the domain does not exist.
# The cost of setting this is that even queries in unsigned domains will need
# one or more extra DNS queries to verify.
#dnssec-check-unsigned

# Uncomment this to filter useless windows-originated DNS requests
# which can trigger dial-on-demand links needlessly.
# Note that (amongst other things) this blocks all SRV requests,
# so don't use it if you use eg Kerberos, SIP, XMMP or Google-talk.
# This option only affects forwarding, SRV records originating for
# dnsmasq (via srv-host= lines) are not suppressed by it.
#filterwin2k

# Change this line if you want dns to get its upstream servers from
# somewhere other that /etc/resolv.conf
#resolv-file=

# By  default,  dnsmasq  will  send queries to any of the upstream
# servers it knows about and tries to favour servers to are  known
# to  be  up.  Uncommenting this forces dnsmasq to try each query
# with  each  server  strictly  in  the  order  they   appear   in
# /etc/resolv.conf
strict-order

# If you don't want dnsmasq to read /etc/resolv.conf or any other
# file, getting its servers from this file instead (see below), then
# uncomment this.
#no-resolv

# If you don't want dnsmasq to poll /etc/resolv.conf or other resolv
# files for changes and re-read them then uncomment this.
#no-poll

# Add other name servers here, with domain specs if they are for
# non-public domains.
#server=/localnet/192.168.0.1

# Example of routing PTR queries to nameservers: this will send all
# address->name queries for 192.168.3/24 to nameserver 10.1.2.3
#server=/3.168.192.in-addr.arpa/10.1.2.3

# Add local-only domains here, queries in these domains are answered
# from /etc/hosts or DHCP only.
#local=/localnet/

# Add domains which you want to force to an IP address here.
# The example below send any host in double-click.net to a local
# web-server.
#address=/double-click.net/127.0.0.1

# --address (and --server) work with IPv6 addresses too.
#address=/www.thekelleys.org.uk/fe80::20d:60ff:fe36:f83

# Add the IPs of all queries to yahoo.com, google.com, and their
# subdomains to the vpn and search ipsets:
#ipset=/yahoo.com/google.com/vpn,search

# You can control how dnsmasq talks to a server: this forces
# queries to 10.1.2.3 to be routed via eth1
# server=10.1.2.3@eth1

# and this sets the source (ie local) address used to talk to
# 10.1.2.3 to 192.168.1.1 port 55 (there must be a interface with that
# IP on the machine, obviously).
# server=10.1.2.3@192.168.1.1#55

# If you want dnsmasq to change uid and gid to something other
# than the default, edit the following lines.
#user=
#group=

# If you want dnsmasq to listen for DHCP and DNS requests only on
# specified interfaces (and the loopback) give the name of the
# interface (eg eth0) here.
# Repeat the line for more than one interface.
#interface=
# Or you can specify which interface _not_ to listen on
#except-interface=
# Or which to listen on by address (remember to include 127.0.0.1 if
# you use this.)
listen-address=127.0.0.1,192.168.2.250
# If you want dnsmasq to provide only DNS service on an interface,
# configure it as shown above, and then use the following line to
# disable DHCP and TFTP on it.
#no-dhcp-interface=

# On systems which support it, dnsmasq binds the wildcard address,
# even when it is listening on only some interfaces. It then discards
# requests that it shouldn't reply to. This has the advantage of
# working even when interfaces come and go and change address. If you
# want dnsmasq to really bind only the interfaces it is listening on,
# uncomment this option. About the only time you may need this is when
# running another nameserver on the same machine.
#bind-interfaces

# If you don't want dnsmasq to read /etc/hosts, uncomment the
# following line.
#no-hosts
# or if you want it to read another file, as well as /etc/hosts, use
# this.
#addn-hosts=/etc/banner_add_hosts

# Set this (and domain: see below) if you want to have a domain
# automatically added to simple names in a hosts-file.
#expand-hosts

# Set the domain for dnsmasq. this is optional, but if it is set, it
# does the following things.
# 1) Allows DHCP hosts to have fully qualified domain names, as long
#     as the domain part matches this setting.
# 2) Sets the "domain" DHCP option thereby potentially setting the
#    domain of all systems configured by DHCP
# 3) Provides the domain part for "expand-hosts"
#domain=thekelleys.org.uk

# Set a different domain for a particular subnet
#domain=wireless.thekelleys.org.uk,192.168.2.0/24

# Same idea, but range rather then subnet
#domain=reserved.thekelleys.org.uk,192.68.3.100,192.168.3.200

# Uncomment this to enable the integrated DHCP server, you need
# to supply the range of addresses available for lease and optionally
# a lease time. If you have more than one network, you will need to
# repeat this for each network on which you want to supply DHCP
# service.
#dhcp-range=192.168.0.50,192.168.0.150,12h

# This is an example of a DHCP range where the netmask is given. This
# is needed for networks we reach the dnsmasq DHCP server via a relay
# agent. If you don't know what a DHCP relay agent is, you probably
# don't need to worry about this.
#dhcp-range=192.168.0.50,192.168.0.150,255.255.255.0,12h

# This is an example of a DHCP range which sets a tag, so that
# some DHCP options may be set only for this network.
#dhcp-range=set:red,192.168.0.50,192.168.0.150

# Use this DHCP range only when the tag "green" is set.
#dhcp-range=tag:green,192.168.0.50,192.168.0.150,12h

# Specify a subnet which can't be used for dynamic address allocation,
# is available for hosts with matching --dhcp-host lines. Note that
# dhcp-host declarations will be ignored unless there is a dhcp-range
# of some type for the subnet in question.
# In this case the netmask is implied (it comes from the network
# configuration on the machine running dnsmasq) it is possible to give
# an explicit netmask instead.
#dhcp-range=192.168.0.0,static

# Enable DHCPv6. Note that the prefix-length does not need to be specified
# and defaults to 64 if missing/
#dhcp-range=1234::2, 1234::500, 64, 12h

# Do Router Advertisements, BUT NOT DHCP for this subnet.
#dhcp-range=1234::, ra-only

# Do Router Advertisements, BUT NOT DHCP for this subnet, also try and
# add names to the DNS for the IPv6 address of SLAAC-configured dual-stack
# hosts. Use the DHCPv4 lease to derive the name, network segment and
# MAC address and assume that the host will also have an
# IPv6 address calculated using the SLAAC alogrithm.
#dhcp-range=1234::, ra-names

# Do Router Advertisements, BUT NOT DHCP for this subnet.
# Set the lifetime to 46 hours. (Note: minimum lifetime is 2 hours.)
#dhcp-range=1234::, ra-only, 48h

# Do DHCP and Router Advertisements for this subnet. Set the A bit in the RA
# so that clients can use SLAAC addresses as well as DHCP ones.
#dhcp-range=1234::2, 1234::500, slaac

# Do Router Advertisements and stateless DHCP for this subnet. Clients will
# not get addresses from DHCP, but they will get other configuration information.
# They will use SLAAC for addresses.
#dhcp-range=1234::, ra-stateless

# Do stateless DHCP, SLAAC, and generate DNS names for SLAAC addresses
# from DHCPv4 leases.
#dhcp-range=1234::, ra-stateless, ra-names

# Do router advertisements for all subnets where we're doing DHCPv6
# Unless overriden by ra-stateless, ra-names, et al, the router
# advertisements will have the M and O bits set, so that the clients
# get addresses and configuration from DHCPv6, and the A bit reset, so the
# clients don't use SLAAC addresses.
#enable-ra

# Supply parameters for specified hosts using DHCP. There are lots
# of valid alternatives, so we will give examples of each. Note that
# IP addresses DO NOT have to be in the range given above, they just
# need to be on the same network. The order of the parameters in these
# do not matter, it's permissible to give name, address and MAC in any
# order.

# Always allocate the host with Ethernet address 11:22:33:44:55:66
# The IP address 192.168.0.60
#dhcp-host=11:22:33:44:55:66,192.168.0.60

# Always set the name of the host with hardware address
# 11:22:33:44:55:66 to be "fred"
#dhcp-host=11:22:33:44:55:66,fred

# Always give the host with Ethernet address 11:22:33:44:55:66
# the name fred and IP address 192.168.0.60 and lease time 45 minutes
#dhcp-host=11:22:33:44:55:66,fred,192.168.0.60,45m

# Give a host with Ethernet address 11:22:33:44:55:66 or
# 12:34:56:78:90:12 the IP address 192.168.0.60. Dnsmasq will assume
# that these two Ethernet interfaces will never be in use at the same
# time, and give the IP address to the second, even if it is already
# in use by the first. Useful for laptops with wired and wireless
# addresses.
#dhcp-host=11:22:33:44:55:66,12:34:56:78:90:12,192.168.0.60

# Give the machine which says its name is "bert" IP address
# 192.168.0.70 and an infinite lease
#dhcp-host=bert,192.168.0.70,infinite

# Always give the host with client identifier 01:02:02:04
# the IP address 192.168.0.60
#dhcp-host=id:01:02:02:04,192.168.0.60

# Always give the Infiniband interface with hardware address
# 80:00:00:48:fe:80:00:00:00:00:00:00:f4:52:14:03:00:28:05:81 the
# ip address 192.168.0.61. The client id is derived from the prefix
# ff:00:00:00:00:00:02:00:00:02:c9:00 and the last 8 pairs of
# hex digits of the hardware address.
#dhcp-host=id:ff:00:00:00:00:00:02:00:00:02:c9:00:f4:52:14:03:00:28:05:81,192.168.0.61

# Always give the host with client identifier "marjorie"
# the IP address 192.168.0.60
#dhcp-host=id:marjorie,192.168.0.60

# Enable the address given for "judge" in /etc/hosts
# to be given to a machine presenting the name "judge" when
# it asks for a DHCP lease.
#dhcp-host=judge

# Never offer DHCP service to a machine whose Ethernet
# address is 11:22:33:44:55:66
#dhcp-host=11:22:33:44:55:66,ignore

# Ignore any client-id presented by the machine with Ethernet
# address 11:22:33:44:55:66. This is useful to prevent a machine
# being treated differently when running under different OS's or
# between PXE boot and OS boot.
#dhcp-host=11:22:33:44:55:66,id:*

# Send extra options which are tagged as "red" to
# the machine with Ethernet address 11:22:33:44:55:66
#dhcp-host=11:22:33:44:55:66,set:red

# Send extra options which are tagged as "red" to
# any machine with Ethernet address starting 11:22:33:
#dhcp-host=11:22:33:*:*:*,set:red

# Give a fixed IPv6 address and name to client with
# DUID 00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2
# Note the MAC addresses CANNOT be used to identify DHCPv6 clients.
# Note also the they [] around the IPv6 address are obilgatory.
#dhcp-host=id:00:01:00:01:16:d2:83:fc:92:d4:19:e2:d8:b2, fred, [1234::5]

# Ignore any clients which are not specified in dhcp-host lines
# or /etc/ethers. Equivalent to ISC "deny unknown-clients".
# This relies on the special "known" tag which is set when
# a host is matched.
#dhcp-ignore=tag:!known

# Send extra options which are tagged as "red" to any machine whose
# DHCP vendorclass string includes the substring "Linux"
#dhcp-vendorclass=set:red,Linux

# Send extra options which are tagged as "red" to any machine one
# of whose DHCP userclass strings includes the substring "accounts"
#dhcp-userclass=set:red,accounts

# Send extra options which are tagged as "red" to any machine whose
# MAC address matches the pattern.
#dhcp-mac=set:red,00:60:8C:*:*:*

# If this line is uncommented, dnsmasq will read /etc/ethers and act
# on the ethernet-address/IP pairs found there just as if they had
# been given as --dhcp-host options. Useful if you keep
# MAC-address/host mappings there for other purposes.
#read-ethers

# Send options to hosts which ask for a DHCP lease.
# See RFC 2132 for details of available options.
# Common options can be given to dnsmasq by name:
# run "dnsmasq --help dhcp" to get a list.
# Note that all the common settings, such as netmask and
# broadcast address, DNS server and default route, are given
# sane defaults by dnsmasq. You very likely will not need
# any dhcp-options. If you use Windows clients and Samba, there
# are some options which are recommended, they are detailed at the
# end of this section.

# Override the default route supplied by dnsmasq, which assumes the
# router is the same machine as the one running dnsmasq.
#dhcp-option=3,192.168.2.1

# Do the same thing, but using the option name
#dhcp-option=option:router,1.2.3.4

# Override the default route supplied by dnsmasq and send no default
# route at all. Note that this only works for the options sent by
# default (1, 3, 6, 12, 28) the same line will send a zero-length option
# for all other option numbers.
#dhcp-option=3

# Set the NTP time server addresses to 192.168.0.4 and 10.10.0.5
#dhcp-option=option:ntp-server,192.168.0.4,10.10.0.5

# Send DHCPv6 option. Note [] around IPv6 addresses.
#dhcp-option=option6:dns-server,[1234::77],[1234::88]

# Send DHCPv6 option for namservers as the machine running
# dnsmasq and another.
#dhcp-option=option6:dns-server,[::],[1234::88]

# Ask client to poll for option changes every six hours. (RFC4242)
#dhcp-option=option6:information-refresh-time,6h

# Set option 58 client renewal time (T1). Defaults to half of the
# lease time if not specified. (RFC2132)
#dhcp-option=option:T1:1m

# Set option 59 rebinding time (T2). Defaults to 7/8 of the
# lease time if not specified. (RFC2132)
#dhcp-option=option:T2:2m

# Set the NTP time server address to be the same machine as
# is running dnsmasq
#dhcp-option=42,0.0.0.0

# Set the NIS domain name to "welly"
#dhcp-option=40,welly

# Set the default time-to-live to 50
#dhcp-option=23,50

# Set the "all subnets are local" flag
#dhcp-option=27,1

# Send the etherboot magic flag and then etherboot options (a string).
#dhcp-option=128,e4:45:74:68:00:00
#dhcp-option=129,NIC=eepro100

# Specify an option which will only be sent to the "red" network
# (see dhcp-range for the declaration of the "red" network)
# Note that the tag: part must precede the option: part.
#dhcp-option = tag:red, option:ntp-server, 192.168.1.1

# The following DHCP options set up dnsmasq in the same way as is specified
# for the ISC dhcpcd in
# http://www.samba.org/samba/ftp/docs/textdocs/DHCP-Server-Configuration.txt
# adapted for a typical dnsmasq installation where the host running
# dnsmasq is also the host running samba.
# you may want to uncomment some or all of them if you use
# Windows clients and Samba.
#dhcp-option=19,0           # option ip-forwarding off
#dhcp-option=44,0.0.0.0     # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
#dhcp-option=45,0.0.0.0     # netbios datagram distribution server
#dhcp-option=46,8           # netbios node type

# Send an empty WPAD option. This may be REQUIRED to get windows 7 to behave.
#dhcp-option=252,"\n"

# Send RFC-3397 DNS domain search DHCP option. WARNING: Your DHCP client
# probably doesn't support this......
#dhcp-option=option:domain-search,eng.apple.com,marketing.apple.com

# Send RFC-3442 classless static routes (note the netmask encoding)
#dhcp-option=121,192.168.1.0/24,1.2.3.4,10.0.0.0/8,5.6.7.8

# Send vendor-class specific options encapsulated in DHCP option 43.
# The meaning of the options is defined by the vendor-class so
# options are sent only when the client supplied vendor class
# matches the class given here. (A substring match is OK, so "MSFT"
# matches "MSFT" and "MSFT 5.0"). This example sets the
# mtftp address to 0.0.0.0 for PXEClients.
#dhcp-option=vendor:PXEClient,1,0.0.0.0

# Send microsoft-specific option to tell windows to release the DHCP lease
# when it shuts down. Note the "i" flag, to tell dnsmasq to send the
# value as a four-byte integer - that's what microsoft wants. See
# http://technet2.microsoft.com/WindowsServer/en/library/a70f1bb7-d2d4-49f0-96d6-4b7414ecfaae1033.mspx?mfr=true
#dhcp-option=vendor:MSFT,2,1i

# Send the Encapsulated-vendor-class ID needed by some configurations of
# Etherboot to allow is to recognise the DHCP server.
#dhcp-option=vendor:Etherboot,60,"Etherboot"

# Send options to PXELinux. Note that we need to send the options even
# though they don't appear in the parameter request list, so we need
# to use dhcp-option-force here.
# See http://syslinux.zytor.com/pxe.php#special for details.
# Magic number - needed before anything else is recognised
#dhcp-option-force=208,f1:00:74:7e
# Configuration file name
#dhcp-option-force=209,configs/common
# Path prefix
#dhcp-option-force=210,/tftpboot/pxelinux/files/
# Reboot time. (Note 'i' to send 32-bit value)
#dhcp-option-force=211,30i

# Set the boot filename for netboot/PXE. You will only need
# this is you want to boot machines over the network and you will need
# a TFTP server; either dnsmasq's built in TFTP server or an
# external one. (See below for how to enable the TFTP server.)
#dhcp-boot=pxelinux.0

# The same as above, but use custom tftp-server instead machine running dnsmasq
#dhcp-boot=pxelinux,server.name,192.168.1.100

# Boot for Etherboot gPXE. The idea is to send two different
# filenames, the first loads gPXE, and the second tells gPXE what to
# load. The dhcp-match sets the gpxe tag for requests from gPXE.
#dhcp-match=set:gpxe,175 # gPXE sends a 175 option.
#dhcp-boot=tag:!gpxe,undionly.kpxe
#dhcp-boot=mybootimage

# Encapsulated options for Etherboot gPXE. All the options are
# encapsulated within option 175
#dhcp-option=encap:175, 1, 5b         # priority code
#dhcp-option=encap:175, 176, 1b       # no-proxydhcp
#dhcp-option=encap:175, 177, string   # bus-id
#dhcp-option=encap:175, 189, 1b       # BIOS drive code
#dhcp-option=encap:175, 190, user     # iSCSI username
#dhcp-option=encap:175, 191, pass     # iSCSI password

# Test for the architecture of a netboot client. PXE clients are
# supposed to send their architecture as option 93. (See RFC 4578)
#dhcp-match=peecees, option:client-arch, 0 #x86-32
#dhcp-match=itanics, option:client-arch, 2 #IA64
#dhcp-match=hammers, option:client-arch, 6 #x86-64
#dhcp-match=mactels, option:client-arch, 7 #EFI x86-64

# Do real PXE, rather than just booting a single file, this is an
# alternative to dhcp-boot.
#pxe-prompt="What system shall I netboot?"
# or with timeout before first available action is taken:
#pxe-prompt="Press F8 for menu.", 60

# Available boot services. for PXE.
#pxe-service=x86PC, "Boot from local disk"

# Loads <tftp-root>/pxelinux.0 from dnsmasq TFTP server.
#pxe-service=x86PC, "Install Linux", pxelinux

# Loads <tftp-root>/pxelinux.0 from TFTP server at 1.2.3.4.
# Beware this fails on old PXE ROMS.
#pxe-service=x86PC, "Install Linux", pxelinux, 1.2.3.4

# Use bootserver on network, found my multicast or broadcast.
#pxe-service=x86PC, "Install windows from RIS server", 1

# Use bootserver at a known IP address.
#pxe-service=x86PC, "Install windows from RIS server", 1, 1.2.3.4

# If you have multicast-FTP available,
# information for that can be passed in a similar way using options 1
# to 5. See page 19 of
# http://download.intel.com/design/archives/wfm/downloads/pxespec.pdf

# Enable dnsmasq's built-in TFTP server
#enable-tftp

# Set the root directory for files available via FTP.
#tftp-root=/var/ftpd

# Do not abort if the tftp-root is unavailable
#tftp-no-fail

# Make the TFTP server more secure: with this set, only files owned by
# the user dnsmasq is running as will be send over the net.
#tftp-secure

# This option stops dnsmasq from negotiating a larger blocksize for TFTP
# transfers. It will slow things down, but may rescue some broken TFTP
# clients.
#tftp-no-blocksize

# Set the boot file name only when the "red" tag is set.
#dhcp-boot=tag:red,pxelinux.red-net

# An example of dhcp-boot with an external TFTP server: the name and IP
# address of the server are given after the filename.
# Can fail with old PXE ROMS. Overridden by --pxe-service.
#dhcp-boot=/var/ftpd/pxelinux.0,boothost,192.168.0.3

# If there are multiple external tftp servers having a same name
# (using /etc/hosts) then that name can be specified as the
# tftp_servername (the third option to dhcp-boot) and in that
# case dnsmasq resolves this name and returns the resultant IP
# addresses in round robin fasion. This facility can be used to
# load balance the tftp load among a set of servers.
#dhcp-boot=/var/ftpd/pxelinux.0,boothost,tftp_server_name

# Set the limit on DHCP leases, the default is 150
#dhcp-lease-max=150

# The DHCP server needs somewhere on disk to keep its lease database.
# This defaults to a sane location, but if you want to change it, use
# the line below.
#dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases

# Set the DHCP server to authoritative mode. In this mode it will barge in
# and take over the lease for any client which broadcasts on the network,
# whether it has a record of the lease or not. This avoids long timeouts
# when a machine wakes up on a new network. DO NOT enable this if there's
# the slightest chance that you might end up accidentally configuring a DHCP
# server for your campus/company accidentally. The ISC server uses
# the same option, and this URL provides more information:
# http://www.isc.org/files/auth.html
#dhcp-authoritative

# Run an executable when a DHCP lease is created or destroyed.
# The arguments sent to the script are "add" or "del",
# then the MAC address, the IP address and finally the hostname
# if there is one.
#dhcp-script=/bin/echo

# Set the cachesize here.
#cache-size=150

# If you want to disable negative caching, uncomment this.
#no-negcache

# Normally responses which come from /etc/hosts and the DHCP lease
# file have Time-To-Live set as zero, which conventionally means
# do not cache further. If you are happy to trade lower load on the
# server for potentially stale date, you can set a time-to-live (in
# seconds) here.
#local-ttl=

# If you want dnsmasq to detect attempts by Verisign to send queries
# to unregistered .com and .net hosts to its sitefinder service and
# have dnsmasq instead return the correct NXDOMAIN response, uncomment
# this line. You can add similar lines to do the same for other
# registries which have implemented wildcard A records.
#bogus-nxdomain=64.94.110.11

# If you want to fix up DNS results from upstream servers, use the
# alias option. This only works for IPv4.
# This alias makes a result of 1.2.3.4 appear as 5.6.7.8
#alias=1.2.3.4,5.6.7.8
# and this maps 1.2.3.x to 5.6.7.x
#alias=1.2.3.0,5.6.7.0,255.255.255.0
# and this maps 192.168.0.10->192.168.0.40 to 10.0.0.10->10.0.0.40
#alias=192.168.0.10-192.168.0.40,10.0.0.0,255.255.255.0

# Change these lines if you want dnsmasq to serve MX records.

# Return an MX record named "maildomain.com" with target
# servermachine.com and preference 50
#mx-host=maildomain.com,servermachine.com,50

# Set the default target for MX records created using the localmx option.
#mx-target=servermachine.com    # Return an MX record pointing to the mx-target for all local  # machines.  #localmx    # Return an MX record pointing to itself for all local machines.  #selfmx    # Change the following lines if you want dnsmasq to serve SRV  # records.  These are useful if you want to serve ldap requests for  # Active Directory and other windows-originated DNS requests.  # See RFC 2782.  # You may add multiple srv-host lines.  # The fields are <name>,<target>,<port>,<priority>,<weight>  # If the domain part if missing from the name (so that is just has the  # service and protocol sections) then the domain given by the domain=  # config option is used. (Note that expand-hosts does not need to be  # set for this to work.)    # A SRV record sending LDAP for the example.com domain to  # ldapserver.example.com port 389  #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389    # A SRV record sending LDAP for the example.com domain to  # ldapserver.example.com port 389 (using domain=)  #domain=example.com  #srv-host=_ldap._tcp,ldapserver.example.com,389    # Two SRV records for LDAP, each with different priorities  #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,1  #srv-host=_ldap._tcp.example.com,ldapserver.example.com,389,2    # A SRV record indicating that there is no LDAP server for the domain  # example.com  #srv-host=_ldap._tcp.example.com    # The following line shows how to make dnsmasq serve an arbitrary PTR  # record. This is useful for DNS-SD. (Note that the  # domain-name expansion done for SRV records _does_not  # occur for PTR records.)  #ptr-record=_http._tcp.dns-sd-services,"New Employee Page._http._tcp.dns-sd-services"    # Change the following lines to enable dnsmasq to serve TXT records.  # These are used for things like SPF and zeroconf. (Note that the  # domain-name expansion done for SRV records _does_not  # occur for TXT records.)    #Example SPF.  
#txt-record=example.com,"v=spf1 a -all"    #Example zeroconf  #txt-record=_http._tcp.example.com,name=value,paper=A4    # Provide an alias for a "local" DNS name. Note that this _only_ works  # for targets which are names from DHCP or /etc/hosts. Give host  # "bert" another name, bertrand  #cname=bertand,bert    # For debugging purposes, log each DNS query as it passes through  # dnsmasq.  #log-queries    # Log lots of extra information about DHCP transactions.  #log-dhcp    # Include another lot of configuration options.  #conf-file=/etc/dnsmasq.more.conf  #conf-dir=/etc/dnsmasq.d    # Include all the files in a directory except those ending in .bak  #conf-dir=/etc/dnsmasq.d,.bak    # Include all files in a directory which end in .conf  #conf-dir=/etc/dnsmasq.d/,*.conf    # Include all files in /etc/dnsmasq.d except RPM backup files  conf-dir=/etc/dnsmasq.d,.rpmnew,.rpmsave,.rpmorig  

The resolv.conf files is as follows: (192.168.1.1 is the original DNS used by the router and I want to forward anything other than mysite.com back to it)

# Generated by NetworkManager  search ctc  nameserver 192.168.1.1  options single-request-reopen    

The hosts file is as follows:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4  ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6    192.168.2.150 test.mysite.com  192.168.2.200 test2.mysite.com   

(There is an extra entry but I don't think it matters here)

I am pretty much stuck at this point. Any help would be appreciated.
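Given the config and files above, the split-DNS intent (serve mysite.com names locally, forward everything else to 192.168.1.1) can be stated in a few dnsmasq lines. A minimal sketch under that assumption, for comparison against the running config:

```
# /etc/dnsmasq.conf sketch
# Names in /etc/hosts (test.mysite.com, test2.mysite.com) are answered
# directly by dnsmasq; everything else goes to the router's DNS.
no-resolv               # don't take upstream servers from resolv.conf
server=192.168.1.1      # forward all other queries to the router
```

With `no-resolv`, dnsmasq ignores resolv.conf for upstreams, which avoids loops if resolv.conf ever points back at 127.0.0.1.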

Logrotate Fails Without Error

Posted: 30 Dec 2021 11:56 PM PST

Ok, I made a boo-boo... I think.

Problem: Logrotate fails (or I incorrectly think it's dead) and does not provide any error message to explain why:

● logrotate.service - Rotate log files     Loaded: loaded (/lib/systemd/system/logrotate.service; static; vendor preset: enabled)     Active: inactive (dead) since Fri 2021-12-31 13:05:25 CST; 42min ago       Docs: man:logrotate(8)             man:logrotate.conf(5)    Process: 27844 ExecStart=/usr/sbin/logrotate /etc/logrotate.conf (code=exited, status=0/SUCCESS)   Main PID: 27844 (code=exited, status=0/SUCCESS)    Dec 31 13:05:25 server1.example.com systemd[1]: Starting Rotate log files...  Dec 31 13:05:25 server1.example.com systemd[1]: logrotate.service: Succeeded.  Dec 31 13:05:25 server1.example.com systemd[1]: Started Rotate log files.  

I wanted to automatically restart logrotate using systemd because sometimes it would fail after a reboot. Therefore in my /usr/lib/systemd/system/logrotate.service file I added:

Restart=always

The above addition killed the logrotate service. From there I decided to undo my dirty work by deleting Restart=always and running systemctl daemon-reload && systemctl start logrotate

No luck.

Then I decided to investigate the syslog and see if I could find any clues, using:
#grep "logrotate" /var/log/syslog. This yielded a clue:

Dec 31 00:00:03 server1 systemd[1]: logrotate.service: Succeeded.  Dec 31 00:36:16 server1 clamd[3544]: Fri Dec 31 00:36:16 2021 -> ^File path check failure on: /var/tmp/systemd-private-2f8e6be5a16040adb29706b9e31ae841-logrotate.service-DbrlAK  Dec 31 00:37:31 server1 systemd[1]: logrotate.service: Succeeded.  Dec 31 12:51:17 server1 systemd[1]: logrotate.service: Succeeded.  Dec 31 13:00:58 server1 systemd[1]: logrotate.service: Succeeded.  Dec 31 13:05:25 server1 systemd[1]: logrotate.service: Succeeded.  

Note: all the times where you see "Succeeded" are from me manually trying to start logrotate.

I read in this post on Server Fault that this problem can be caused by logrotate trying to access logs outside of the /var/log/ directory. And I thought that this may be my problem; however, I can't find any indicator of a log outside /var/log, except for the syslog error above:

reiteration:

Dec 31 00:36:16 server1 clamd[3544]: Fri Dec 31 00:36:16 2021 -> ^File path check failure on: /var/tmp/systemd-private-2f8e6be5a16040adb29706b9e31ae841-logrotate.service-DbrlAK  

From there I investigated clamd, but

#grep "log" /etc/clamav/clamd.conf   LogSyslog false  LogFile /var/log/clamav/clamav.log  

yields nothing. Does anyone know why logrotate won't start?
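Not from the thread, but a general debugging sketch may help here. A oneshot systemd unit like logrotate.service normally shows "inactive (dead)" plus status=0/SUCCESS after a successful run, and systemd refuses Restart=always on Type=oneshot units, which is likely why adding it broke the service. Assuming stock Debian/Ubuntu paths, one can verify the unit and dry-run logrotate:

```
# Confirm the unit no longer carries the stray Restart= line
systemctl cat logrotate.service

# Dry run: show what logrotate *would* do, without rotating anything
/usr/sbin/logrotate -d /etc/logrotate.conf

# Force a verbose real run to surface any per-logfile errors
/usr/sbin/logrotate -v -f /etc/logrotate.conf
```

If the dry run prints a full rotation plan with no errors, logrotate itself is healthy and the "failure" is just the normal resting state of a oneshot unit.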

Windows CLI way to copy to the same directory and only change the case of the filename?

Posted: 31 Dec 2021 01:54 AM PST

I have Windows 10 pro, with NTFS. I think the filesystem is fully case-sensitive. I can have the file Bill_and_Ted.txt in a directory, and write scripts that won't mistake it for bill_and_ted.txt. Linux WSL apps accessing NTFS directories are fully case-sensitive. But it seems that Windows utilities get confused.

So NTFS is probably case sensitive, but perhaps Windows is not. Is it possible in Windows to create two files in the same directory that only differ in ASCII case?

For various software development reasons, I would like to have the files Bill_and_Ted.txt and bill_and_ted.txt in the same directory, and then change the content. But so far, Powershell Copy-Item and Windows xcopy refuse to copy to the same directory when the filenames differ only in case. They fail with "File cannot be copied onto itself"

Is there a built-in Windows way to copy to the same directory and only change the case of the filename?
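For context (not from the question itself): since Windows 10 1803, NTFS supports an opt-in per-directory case-sensitivity flag, which is also what WSL relies on. A hedged sketch, where the directory path is an assumption:

```
:: Enable case sensitivity for one directory (Windows 10 1803+, elevated prompt)
fsutil.exe file setCaseSensitiveInfo C:\dev\myproject enable

:: After this, both names can coexist in that directory
copy Bill_and_Ted.txt bill_and_ted.txt
```

Without that flag set, the Win32 layer treats the two names as the same file, which is consistent with the "File cannot be copied onto itself" error.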

Centos server can ping IPs - but cannot ping domains

Posted: 31 Dec 2021 03:36 AM PST

I have 3 servers with DigitalOcean (AMS3). Suddenly, three servers at the same time faced the same issue. It seems the servers cannot connect to the outside world. I tried to ping different IP addresses and domains. Here are the results:

ping 8.8.8.8  PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.  64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=2.11 ms  64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=0.946 ms  64 bytes from 8.8.8.8: icmp_seq=3 ttl=60 time=0.724 ms  

ping google.com  ping: google.com: Name or service not known  

I searched for a solution on StackOverflow, ServerFault, and DO Community. There were suggestions that the file /etc/resolv.conf might have issues. Here is my /etc/resolv.conf file:

cat /etc/resolv.conf  ; Created by cloud-init on instance boot automatically, do not edit.  nameserver 8.8.8.8  nameserver 8.8.4.4  

The contents of other files you may want to see:

cat /etc/nsswitch.conf    passwd:     files sss  shadow:     files sss  group:      files sss  #initgroups: files sss    #hosts:     db files nisplus nis dns  hosts:      files dns myhostname    # Example - obey only what nisplus tells us...  #services:   nisplus [NOTFOUND=return] files  #networks:   nisplus [NOTFOUND=return] files  #protocols:  nisplus [NOTFOUND=return] files  #rpc:        nisplus [NOTFOUND=return] files  #ethers:     nisplus [NOTFOUND=return] files  #netmasks:   nisplus [NOTFOUND=return] files    bootparams: nisplus [NOTFOUND=return] files    ethers:     files  netmasks:   files  networks:   files  protocols:  files  rpc:        files  services:   files sss    netgroup:   nisplus sss    publickey:  nisplus    automount:  files nisplus sss  aliases:    files nisplus  

cat /etc/sysconfig/network-scripts/ifcfg-eth0    BOOTPROTO=none  DEFROUTE=yes  DEVICE=eth0  GATEWAY=174.138.0.1  HWADDR=16:68:53:c5:4e:5e  IPADDR=174.138.X.Y  IPADDR1=10.18.0.19  IPV6ADDR=2A03:B0C0:0002:00D0:0000:0000:X:Y/64  IPV6INIT=yes  IPV6_DEFAULTGW=2A03:B0C0:0002:00D0:0000:0000:0000:0001  MTU=1500  NETMASK=255.255.240.0  NETMASK1=255.255.0.0  ONBOOT=yes  TYPE=Ethernet  USERCTL=no  

dig google.com @8.8.8.8    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.4 <<>> google.com @8.8.8.8  ;; global options: +cmd  ;; connection timed out; no servers could be reached  

dig google.com @2001:4860:4860::8888    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.4 <<>> google.com @2001:4860:4860::8888  ;; global options: +cmd  ;; connection timed out; no servers could be reached  

cat /etc/hosts  # Do not remove the following line, or various programs  # that require network functionality will fail.  127.0.0.1 bizcloud-vds bizcloud-vds  127.0.0.1 localhost.localdomain localhost  127.0.0.1 localhost4.localdomain4 localhost4    ::1 bizcloud-vds bizcloud-vds  ::1 localhost.localdomain localhost  ::1 localhost6.localdomain6 localhost6    174.138.XXX.YYY           cm105srv.ABC.ir cm105srv  

Any help is appreciated.
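Since ICMP to 8.8.8.8 succeeds while dig against the same host times out, this points at port 53 being blocked rather than at resolv.conf. A quick sketch for testing DNS reachability directly (assuming dig and iptables are available):

```
# Probe UDP/53 and TCP/53 separately
dig +time=2 +tries=1 google.com @8.8.8.8
dig +tcp +time=2 +tries=1 google.com @8.8.8.8

# Look for local firewall rules touching port 53
iptables -L OUTPUT -n -v | grep -i 53
```

If both probes time out while ping works, the filtering is most likely outside the host (e.g. a provider-level firewall), which would also explain three droplets breaking at once.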

My EC2 Ubuntu instance has no internet access?

Posted: 31 Dec 2021 01:24 AM PST

So I'm new to EC2 and AWS. I created an account yesterday and launched an Ubuntu instance. I can update and upgrade or install new packages, but the problem comes when using an API that connects to a game API:

https://gitlab.com/man90/black-desert-social-rest-api

I build and run the API on the instance and it runs normally.

The problem is that when calling the API, for some reason I get a 404 Not Found response.

    Used configuration:      Proxies:        []      Port:           8001      Cache TTL:      180 minutes    2021/12/30 07:34:29 Listening for requests  

But when calling the API from Python I get a 404 Not Found response, so maybe the EC2 instance cannot resolve the domain or it is unreachable.

ubuntu@ip-XXX-XXX-XXX-XXXX:~/bdo/guild-scraping$ python3 gsheet.py  404 page not found  

If I ping the site from the EC2 instance it works as normal, but for some reason running the API and calling it from Python gives an error:

 ubuntu@ip-XXX-XXX-XXX-XXX:~/bdo/guild-scraping$ ping www.naeu.playblackdesert.com  PING ds7lduf.impervadns.net (45.223.17.187) 56(84) bytes of data.  64 bytes from 45.223.17.187 (45.223.17.187): icmp_seq=1 ttl=33 time=8.27 ms  64 bytes from 45.223.17.187 (45.223.17.187): icmp_seq=2 ttl=33 time=8.15 ms  64 bytes from 45.223.17.187 (45.223.17.187): icmp_seq=3 ttl=33 time=8.13 ms  64 bytes from 45.223.17.187 (45.223.17.187): icmp_seq=4 ttl=33 time=8.18 ms  64 bytes from 45.223.17.187 (45.223.17.187): icmp_seq=5 ttl=33 time=8.16 ms  

If I call the API outside my code the same thing happens, so I don't know if my code cannot reach the API for some reason, or the API itself cannot reach the game website.

 import requests   r = requests.get("http://localhost:8001/v1/guild", params=payload)   print(r.text)  404 page not found  
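Worth noting: a plain-text "404 page not found" body is the default Go router response, which suggests the request *did* reach the local API but matched no registered route, rather than a networking failure. A sketch for checking the exact path (the query parameters here are assumptions, not taken from the project's docs):

```
# -v shows which server answered; a Go-style "404 page not found" body
# means the API received the request but no route matched the path
curl -v "http://localhost:8001/v1/guild?guildName=example&region=NA"
```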

Zabbix key with comma

Posted: 31 Dec 2021 04:23 AM PST

I'm trying to create a Zabbix item with a key which contains commas, and it results in a "Too many parameters." error. I don't see any opportunity to get rid of the comma in my key. I've already tried many ways of enclosing the key or parts of it in quotes, double quotes, etc., but nothing worked for me. I don't want to use the "Database monitor" item type; I'd like to stay with the simple "Zabbix agent" type.

My key is

system.run[sqlcmd -S SERVERNAME-q "SELECT Count(Datediff(second, mail_tsinsert, mail_tsupdate)) FROM   TABLENAME WHERE Datediff(second, mail_tsinsert, mail_tsupdate) > 200"]  

Of course I've changed the server name and table name for the sake of the example; the query works like a charm when executed in cmd.

Is there a way to escape zabbix item keys?
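In Zabbix item key syntax, a parameter containing commas or square brackets must be wrapped in double quotes, and any embedded double quotes escaped with a backslash. A sketch of the key above with that quoting applied (server and table names remain placeholders):

```
system.run["sqlcmd -S SERVERNAME -q \"SELECT Count(Datediff(second, mail_tsinsert, mail_tsupdate)) FROM TABLENAME WHERE Datediff(second, mail_tsinsert, mail_tsupdate) > 200\""]
```

The outer double quotes make the entire sqlcmd command a single key parameter, so the commas inside the SQL no longer split it.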

ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation

Posted: 31 Dec 2021 01:00 AM PST

I already checked all the quotas and they seem to be fine. I don't know what causes the error.

Updating service [default] (this may take several minutes)...failed.     ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/objreg-278609/regions/us-central1/operations/214e2dcc-8a7a-4204-898a-580dc14e6a97 error [INTERNAL]: An internal error occurred while processing task /appengine-flex-v1/insert_flex_deployment/flex_create_resources>2020-05-28T10:58:17.771Z15266.ow.8: Deployment Manager operation objreg-278609/operation-1590663498298-5a6b334c5f340-589a82aa-ed20dd6f errors: [code: "RESOURCE_ERROR"  location: "/deployments/aef-default-20200528t054325/resources/aef-default-20200528t054325"  message: "{\"ResourceType\":\"compute.beta.regionAutoscaler\",\"ResourceErrorCode\":\"403\",\"ResourceErrorMessage\":{\"code\":403,\"errors\":[{\"domain\":\"usageLimits\",\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200528t054325\'. Limit: 8.0\",\"reason\":\"limitExceeded\"}],\"message\":\"Exceeded limit \'QUOTA_FOR_INSTANCES\' on resource \'aef-default-20200528t054325\'. Limit: 8.0\",\"statusMessage\":\"Forbidden\",\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/objreg-278609/regions/us-central1/autoscalers\",\"httpMethod\":\"POST\"}}"  ]  

Please help me solve it.
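The embedded error ("Exceeded limit 'QUOTA_FOR_INSTANCES' ... Limit: 8.0") points at a regional Compute Engine instance quota, which is easy to miss on the global quotas page. A sketch for inspecting the region named in the error directly (project ID and region taken from the error message):

```
# Prints per-region quotas, including INSTANCES usage vs. limit
gcloud compute regions describe us-central1 --project objreg-278609
```

If INSTANCES usage is at the limit, the fix would be reducing App Engine Flex instance counts or requesting a quota increase for that region.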

Blocking phpmyadmin from internet, allow only from lan in nginx

Posted: 31 Dec 2021 05:38 AM PST

I'm running 2 websites on a LEMP stack with nginx configured as a reverse proxy server. I have successfully installed phpMyAdmin in the root folder of one of my sites' root directories. When I go to www.example.com/phpmyadmin, I am able to access the phpMyAdmin login page from the public internet as well as from my LAN. What I would like to do is configure nginx to block any traffic to phpMyAdmin that doesn't originate from my local area network. I also have an /admin folder in the root of my site, and I HAVE SUCCESSFULLY set up a way to block all traffic to that folder that doesn't originate from my LAN. I figured blocking phpMyAdmin from the outside world would be as easy as using the same nginx virtual host configuration lines I used to block the /admin/ directory, just changing the location to /phpmyadmin. However, when doing this, phpMyAdmin is still blocked on the local network.

Below is my nginx virtual host configuration for example.com. You can see which blocking configurations work and which don't, as noted in the comments. Help me fix the #Not working lines. Note: my server's local IP address is 192.168.1.20.

server {          listen 80;          listen [::]:80;          server_name example.com  www.example.com;          return 301 https://$host$request_uri;  }    server {          ####          # SSL configuration          ####            listen 443 ssl http2;          listen [::]:443 ssl http2;            ssl on;          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;          ssl_session_timeout 1d;          ssl_session_cache shared:SSL:10m;          ssl_session_tickets off;            # Modern SSL Security rating          ssl_protocols TLSv1.2 TLSv1.3;          ssl_prefer_server_ciphers on;            # HSTS (ngx_http_headers_module is required) (63072000 seconds)          add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;            # OCSP Stapling          ssl_stapling on;          ssl_stapling_verify on;            # verify chain of trust of OCSP response using Root CA and Intermediate certs          ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;            # replace with the IP address of your resolver          resolver 1.1.1.1 1.0.0.1;            # Your website name goes here.          server_name example.com  www.example.com;          root /var/www/example.com;            # Error & Access Logs          error_log /var/www/example.com.logs/error.log error;          access_log /var/www/example.com.logs/access.log;            ## This should be in your http block and if it is, it's not needed here.          
index index.php;            location ~ /.well-known {                  allow all;          }            location = /favicon.ico {                  log_not_found off;                  access_log off;          }            location = /robots.txt {                  allow all;                  log_not_found off;                  access_log off;          }            location / {          # try_files $uri $uri/ =404;          try_files $uri $uri/ /index.php?$args;          }            # Cache Static Files For As Long As Possible          location ~*          \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$                  {                  access_log off;                  log_not_found off;                  expires max;          }            # Security Settings For Better Privacy Deny Hidden Files          location ~ /\. {                  deny all;                  access_log off;                  log_not_found off;          }            # Disallow PHP In Upload Folder          location /wp-content/uploads/ {                  location ~ \.php$ {                          deny all;                  }          }          # LAN ONLY ACCESS WORKING          # Only allow access of /admin via LAN & block access from internet # WORKING          location ^~ /admin {                  allow 192.168.1.0/24;                  deny all;                  include snippets/fastcgi-php.conf;                  fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;                  fastcgi_split_path_info ^(.+\.php)(/.+)$;          }          # LAN ONLY ACCESS NOT WORKING!!!          
# Only allow access of /phpmyadmin from LAN & block access from internet # NOT WORKING          location ^~ /phpmyadmin {                  allow 192.168.1.0/24;                  deny all;                  include fastcgi.conf;                  fastcgi_intercept_errors on;                  fastcgi_pass local_php;                  fastcgi_buffers 16 16k;                  fastcgi_buffer_size 32k;          }          # LAN ONLY ACCESS WORKING          # Only allow access to wp-login page from LAN & block access from internet # WORKING          location ~ /wp-login.php {                  allow 192.168.1.0/24;                  deny all;                  include fastcgi.conf;                  fastcgi_intercept_errors on;                  fastcgi_pass local_php;                  fastcgi_buffers 16 16k;                  fastcgi_buffer_size 32k;          }            location ~ \.php$ {                  include fastcgi.conf;                  fastcgi_intercept_errors on;                  fastcgi_pass local_php;                  fastcgi_buffers 16 16k;                  fastcgi_buffer_size 32k;          }            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {                  expires max;                  log_not_found off;          }  }  

What edits to my virtual host config file must I make to properly restrict phpmyadmin to my LAN in Nginx?
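One possible pitfall in the config above: the PHP handler directives pasted into the /phpmyadmin block must actually match the .php files under that prefix, otherwise requests fall through to other locations and the allow/deny pair never applies the way you expect. A hedged sketch using a nested PHP location (the socket path and snippet name are assumptions copied from the working /admin block):

```
# Allow /phpmyadmin only from the LAN, with PHP handled inside the prefix
location ^~ /phpmyadmin {
        allow 192.168.1.0/24;
        deny all;

        index index.php;
        try_files $uri $uri/ =404;

        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
        }
}
```

Because `^~` stops regex location matching, the nested `\.php$` block is required so PHP requests under the prefix are still executed after passing the LAN check.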

XRDP same user multiple session

Posted: 31 Dec 2021 01:00 AM PST

I'm trying to make XRDP work with multiple sessions on my Linux Mint server. Right now, I can connect only if there is no other session running on the system (I had to disable autologin).

I don't know why, but with the Raspberry it just works by default the way I want: when I connect to XRDP, a new session is created for every client. When another client connects to the same server with the same user, a new session is created.

I tried to change the Policy setting in the /etc/xrdp/sesman.ini file from Default to UBDC but nothing changed.

It's the first question I post, so I ask you to be really patient with me and ask me the files you may need to understand the situation.

I swear I searched all the internet but found nothing that helped. I just know it can be done 'cause my Raspberry does it for some odd reason.

Thank you :)

Poor write performance with HP ProLiant ML 150 Gen9

Posted: 31 Dec 2021 04:53 AM PST

Transferring large files from one drive (USB or SATA) to the RAID in my HP ProLiant ML150 Gen9 is slow. At the beginning we suspected the B140i controller - a pseudo-RAID controller without any cache memory.

This is the original B140i performance and the improvement after upgrading to smart array p440/4gbFWC.

B140i PERFORMANCE P440-4G PERFORMANCE

RAID configuration is RAID 10 with 4 x 500GB SSD drives in both cases.

Although improved, the problem was still present: when transferring large files, speed drops dramatically after a couple of minutes, from 400 MB/s down to 6-7 MB/s, and remains there till the end of the transfer: SPEED DROP

I tried without success:

  • Clean install of Windows 2012R2

  • Clean install of Windows 2019

  • Upgraded all firmware and drivers of using the latest ProLiant Service Pack


This is performance while copying a file from the P440/4GB volume to the same volume:


Now the machine is running 3 VMs with only 18% of memory free. The older tests were done without any VMs running.

Azure AD SSO for non-azure Linux VMs?

Posted: 31 Dec 2021 01:08 AM PST

I currently have VPS hosting for two Ubuntu servers outside the Azure network, and a free Azure AD plan. I see this option here:

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/login-using-aad

but it is only for Azure VMs. Can I use Azure AD for hosts outside of Azure?

How to trace cron actions?

Posted: 31 Dec 2021 12:46 AM PST

I know that there are some cron jobs (run every minute) scheduled in my Ubuntu.

How do I track what's running them, when the cron files (sudo su; crontab -e) are empty?
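Beyond per-user crontabs, cron jobs on Ubuntu can live in several system locations, and systemd timers can also fire every minute. A sketch of the usual places to look:

```
# System-wide crontab and drop-in directories
cat /etc/crontab
ls -l /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly

# Crontabs of every user, not just root
for u in $(cut -f1 -d: /etc/passwd); do crontab -u "$u" -l 2>/dev/null; done

# What actually ran, from the log
grep CRON /var/log/syslog | tail

# Don't forget systemd timers
systemctl list-timers --all
```

The syslog entries name the user and command for each run, which usually identifies the source even when `crontab -e` looks empty.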

Getting emails with @pps.reinject in the CC recipients

Posted: 31 Dec 2021 04:04 AM PST

This is an example:

Remy Blättler (Supertext AG)@pps.reinject <=?UTF-8?Q?Remy_Bl=C3=A4ttler_=28Supertext_AG=29?=@pps.reinject>  

We are using Office365 to send out our email, but this is usually when we get emails back from clients. And this is not only for one client. We use Outlook 2016 on the Desktop.

From a Google search I found Proofpoint Protection Server. But that does not really explain much...

Any idea what could be wrong? And on which side?

Remote Desktop to 80% of my servers no longer works ("User account restriction") from just one of my PCs

Posted: 31 Dec 2021 05:05 AM PST

I came into work last week, checked my first ticket (an easy-to-fix one), RDP'd into the server needed for this, and the login did not work. After clicking 'connect' I got the "Unable to Log You on Because of an Account Restriction" message. Checked another server (all machines are 2008R2/2012R2): the same message. No, I do not have an empty password, and I'm not using network auth; my client is Windows 10 (1607).

Here is what I did:

  • Used another client (Win10.1607), same OU, same setup. Can perfectly log in from anywhere to anywhere (so I am assuming it's not my user account or a GPO)
  • Checked servers: I can RDP into all my DCs and a few other machines (2008R2/2012R2), looks random to me (all servers in the same OU, no special software installed)
  • Deleted the mstsc cache (%appdata%..\local\Microsoft\Terminal Server Client* )
  • Cleaned up HCU\SOFTWARE\Microsoft\Terminal Server Client
  • Watched the event logs: nothing. Absolutely nothing. So I assume it's my client, not the servers. But I can RDP into all my servers at home and in another (customer's) network ...
  • Checked date/time on client/server (0.0002ms apart)
  • Checked account restrictions on my account (neither time nor machine restrictions are present)
  • Checked if logon at the console works (vm/ilo): works perfectly fine with my credentials
  • Checked if share access would work (\\server\share): does not work, I am seeing the same error message. Works from clientB, but not from clientA.
  • When doing the same thing from one of the 'working' machines (server or client), everything is fine.

Any ideas where to look for this? It is haunting me in my sleep :-(

Updates: Surely I checked the local policies on the server(s); any changes would have surprised me - there are a lot of servers. Also checked the client's GPOs, nothing.

No Response on NGINX when using upstream

Posted: 31 Dec 2021 05:05 AM PST

I'm trying to load balance a web application through nginx. It works fine, except when my web application calls a service with a sub-path.

for example it works

http://example.com/luna/   

but not for

 http://example.com/luna/sales  

My nginx.conf

user  nobody;  worker_processes  auto;    events {      worker_connections  1024;  }    http {      include       mime.types;      default_type  application/octet-stream;        sendfile        on;      keepalive_timeout  65;         map $http_upgrade $connection_upgrade {          default upgrade;          '' close;      }        upstream lunaups {          server myhostserver1.com:8080;          server myhostserver2.com:8080;      }          server {          listen       80;          server_name  example.com;            proxy_pass_header Server;            location = / {               rewrite ^ http://example.com/luna redirect;           }            location /luna {              rewrite ^$/luna/(.*)/^ /$1 redirect;              proxy_pass http://lunaups;              #add_header  X-Upstream  $upstream_addr;          }            error_page   500 502 503 504  /50x.html;          location = /50x.html {              root   html;          }      }  }  

My web application's calls to a service with an additional subpath like /luna/sales fail to return a response. What am I missing here?

It works if I remove one of my host servers from the upstream, but when I add the second host to the upstream it fails to return a response.

Is my rewrite rule wrong, or is my configuration as a whole wrong?
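For reference, the rewrite pattern in the /luna block (`^$/luna/(.*)/^`) can never match: `$` directly after `^` anchors an empty string. If the intent is to strip the /luna prefix before proxying, a hedged sketch (keeping the upstream name from the config above):

```
location /luna/ {
    # A URI part on proxy_pass ("/") replaces the matched /luna/ prefix,
    # so no rewrite is needed at all
    proxy_pass http://lunaups/;
    proxy_set_header Host $host;
}
```

With this form, /luna/sales is forwarded to the backends as /sales on whichever upstream server is chosen, which also behaves identically with one or two hosts in the pool.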

Web app running on tomcat not updating when modified

Posted: 31 Dec 2021 12:02 AM PST

I'm modifying a web app coded by another guy with AngularJS. This app is fed by CSV data files and was running fine in the first place. However, when I try to change some data in the CSV files, every part of the app that relies on data taken from those .csv files breaks.

I first suspected this problem was related to the fact that Excel was recognizing the .csv files as SYLK files when I tried to modify them. However, when I tried to replace the new .csv files with the old ones, it didn't change anything. Even more, removing the whole app and putting the old one back in place didn't change anything either.

So now, I'm suspecting there is some cache problem with the Tomcat server (8.0 under Windows) I'm running the app on. I tried deleting the localhost folder in work/Catalina from the Tomcat installation folder as suggested in another question on Server Fault, but it doesn't change anything either (neither under IE nor Chrome). The only way I can go back to a working app is to reboot my computer, but obviously I don't want to reboot each time I make a modification.

Any idea to what could be causing the problem?
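Not from the thread, but Tomcat 8's static-resource cache is a plausible suspect: files served from the webapp (including .csv) are cached in memory with a default TTL of a few seconds, and browsers add their own caching on top. A sketch disabling the Tomcat side in the app's context descriptor (the file location is an assumption; it can also live under conf/Catalina/localhost/):

```xml
<!-- META-INF/context.xml: turn off static-resource caching while developing -->
<Context>
    <Resources cachingAllowed="false" cacheTtl="0" />
</Context>
```

If the stale data persists after this, testing with the browser cache disabled (e.g. DevTools "Disable cache") would separate server-side from client-side caching.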

Is there a way to limit bandwidth per ip using HTB + a CIDR range in Linux?

Posted: 31 Dec 2021 12:02 AM PST

I can create rules to limit an entire subnet or to limit individual IP addresses with tc and htb. I am looking to use CIDR ranges to keep things somewhat elegant.

The machines in question are all running CentOS 7. I have been attempting to use tc + htb to accomplish this, but I am open to other tools if there is a better method.

My goal is to limit by a CIDR range and assign individual limits per source ip address.

For example, set the global limit for 192.168.1.0/24 to 100Mb/s, where each source IP within 192.168.1.0/24 has an individual upload limit of 10Mb/s that may not be exceeded.

Here is a working example of what I am doing for each IP (looking to simplify the procedure if possible):

These steps only need to be performed once:

Create initial HTB qdisc:  # tc qdisc add dev eth0 root handle 1: htb default 12           Create root class:  # tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit    These steps must be performed for each IP in the CIDR range using current method (what I am looking to hopefully improve):    A class must be added for each source ip:  # tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 100mbit  # tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 100mbit  # tc class add dev eth0 parent 1:1 classid 1:12 htb rate 10mbit ceil 100mbit    A filter must be created for each source ip:  # tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.2 flowid 1:10  # tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.3 flowid 1:11  

It may be that there is no elegant way to do this, but any tips / advice would be greatly appreciated. I have looked through several guides online such as http://lartc.org. Thank you.

Revoke multiple client certs signed by one CA: only the first one got denied?

Posted: 31 Dec 2021 04:04 AM PST

  • OS: Ubuntu 12.04
  • OpenVPN version: 2.2.1-8
  • Setup: one CA cert, one server cert, multiple client certs

Server config:

port 1194
proto udp
dev tun
keepalive 10 120
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/team.log
syslog vpn-team
verb 4
writepid /var/run/openvpn-team.pid
ca /etc/openvpn/ca.crt
cert /etc/openvpn/team/server.crt
key /etc/openvpn/team/server.key  # This file should be kept secret
dh /etc/openvpn/dh.pem
server 172.16.255.128 255.255.255.128
ifconfig-pool-persist /etc/openvpn/team/ipp.txt
client-to-client
push "route 172.16.0.0 255.255.254.0"
crl-verify crl.pem

Client config:

dev tun
proto udp
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
comp-lzo
verb 4
client
remote x.x.x.x 1194
ca ca.crt
cert team.crt
key team.key
remote-cert-tls server

Using the revoke-full script from the easy-rsa package, I saw that it only outputs the last one into a crl.pem file:

# generate a new CRL -- try to be compatible with
# intermediate PKIs
$OPENSSL ca -gencrl -out "$CRL" -config "$KEY_CONFIG"

In my case, I wrote a script to append to that file, but only the first one gets denied; all the others can still connect.

Using openssl crl, it shows only the serial of the first one:

Revoked Certificates:
    Serial Number: E9955907C7F48BDDFCADCFECFAEDC8B7
        Revocation Date: Feb 11 08:57:19 2015 GMT

So, the question is: does crl-verify support a concatenated CRL file? Is this a limitation of openssl?
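In case it helps frame the question: with plain openssl, the usual pattern is not to concatenate CRLs at all but to record every revocation in the CA database and regenerate a single crl.pem that lists all serials. A self-contained sketch against a throwaway CA (the file names and the minimal config are illustrative, not easy-rsa's layout):

```shell
#!/bin/bash
set -e
cd "$(mktemp -d)"

# Throwaway CA database + minimal config (names are illustrative only)
mkdir certs && touch index.txt && echo 1000 > serial && echo 01 > crlnumber
cat > ca.cnf <<'EOF'
[ ca ]
default_ca = myca
[ myca ]
database         = index.txt
new_certs_dir    = certs
serial           = serial
crlnumber        = crlnumber
default_md       = sha256
default_days     = 1
default_crl_days = 30
policy           = loose
[ loose ]
commonName       = supplied
EOF

openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -subj /CN=ThrowawayCA -days 1

for name in alice bob; do
  openssl req -newkey rsa:2048 -nodes -keyout "$name.key" \
          -out "$name.csr" -subj "/CN=$name"
  openssl ca -batch -config ca.cnf -cert ca.crt -keyfile ca.key \
          -in "$name.csr" -out "$name.crt"
done

# Revoke BOTH certs, then regenerate ONE crl.pem from the database
openssl ca -config ca.cnf -cert ca.crt -keyfile ca.key -revoke alice.crt
openssl ca -config ca.cnf -cert ca.crt -keyfile ca.key -revoke bob.crt
openssl ca -config ca.cnf -cert ca.crt -keyfile ca.key -gencrl -out crl.pem

openssl crl -in crl.pem -noout -text | grep -c 'Serial Number:'   # both serials listed
```

If regenerating the CRL this way makes both clients fail to connect, the problem was the concatenation, not crl-verify.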


Windows Server 2012 RD Licensing Issuing Multiple Temp Licenses Per Machine

Posted: 31 Dec 2021 02:08 AM PST

I have just set up a Windows Server 2012 RDS environment with Per Device CALs. Looking at RD Licensing Manager, it is handing out multiple temporary CALs per machine as well as multiple permanent CALs per machine. At this rate I will run out of licenses very shortly.

I understand that it would issue a temporary license until the second logon, but why would it issue multiple licenses to the same machine?
How can I get a better breakdown of the differences between the issued CALs than RD Licensing Manager offers?
Are there any PowerShell commands to find out more information?

Is STARTTLS less safe than TLS/SSL?

Posted: 31 Dec 2021 02:32 AM PST

In Thunderbird (and I assume in many other clients, too) I have the option to choose between "SSL/TLS" and "STARTTLS".

As far as I understand it, "STARTTLS" means, in simple words, "encrypt if both ends support TLS; otherwise don't encrypt the transfer", and "SSL/TLS" means "always encrypt, or don't connect at all". Is this correct?

Or in other words:

Is STARTTLS less secure than SSL/TLS, because it can fall back to plaintext without notifying me?

persistent SSH connection while connecting to VPN

Posted: 31 Dec 2021 03:42 AM PST

I have a Linux machine on the intranet which I can only access via SSH. This machine needs to connect to a VPN using openconnect; however, when I do that I get disconnected from the SSH session, since the intranet IP is no longer valid.

I can reconnect to it from within the VPN using the IP it got assigned, but that IP changes every time the VPN connects. I don't have control over any other networks, only this machine.

Is there a way to keep the SSH connection alive while connecting to the VPN? Thanks.


openconnect requires a --script argument which takes a script to configure routing. Without it the connection succeeds, but no names are resolved and the intranet IP remains valid.

I'm currently using Ubuntu's default /etc/vpnc/vpnc-script (pasted here). I'm good with shell scripting, but I know very little about networking, so if I have to modify that script I'll need some reference on what to change and how.
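One common workaround (my suggestion, not something the question or the vpnc-script states) is to add a /32 host route for the SSH client before the VPN rewrites the routing table, so that one path stays on the intranet. Below is a sketch that only parses the output of `ip route get` and prints the command to run; `pin_cmd_from_route` is a made-up helper name:

```shell
# Hypothetical helper: given the SSH client's IP and the current
# "ip route get <ip>" output, print a host-route command that keeps
# that one path outside the VPN. Run the printed command as root
# BEFORE starting openconnect.
pin_cmd_from_route() {
  local dest="$1" via="" dev=""
  shift
  set -- $*               # word-split the route line into tokens
  while [ $# -gt 0 ]; do
    case "$1" in
      via) via="$2"; shift ;;
      dev) dev="$2"; shift ;;
    esac
    shift
  done
  if [ -n "$via" ]; then
    echo "ip route add $dest/32 via $via dev $dev"
  else
    echo "ip route add $dest/32 dev $dev"    # directly connected peer
  fi
}

# Typical use over SSH (SSH_CLIENT holds "client_ip client_port server_port"):
#   client=${SSH_CLIENT%% *}
#   pin_cmd_from_route "$client" "$(ip route get "$client")"
pin_cmd_from_route 10.0.0.5 "10.0.0.5 via 10.0.0.1 dev eth0 src 10.0.0.7"
```

The same command could also be dropped into the vpnc-script's pre-init phase so it runs automatically on each connect.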

Enabling DSA key authentication for SFTP while still keeping password login as optional (Ubuntu 12.04)

Posted: 31 Dec 2021 02:08 AM PST

I have a server running Ubuntu 12.04 Server. I want to be able to use SFTP on the command line with a DSA key, so I don't have to type the password into the terminal. Is this possible to do on the same server, i.e. SFTP to localhost (to test some PHP code before running it live)? I still want to allow password login for other clients if they want it: the key should not be forced, but the password prompt should be skipped whenever a valid key is presented.

I have the following options enabled in ssh_config:

RSAAuthentication yes
PasswordAuthentication yes
PubkeyAuthentication yes
IdentityFile ~/.ssh/id_dsa

The following files with shown permissions are in /root/.ssh/

-rw-r--r--  1 root root  668 Apr 10 11:06 authorized_keys
-rw-------  1 root root  668 Apr 10 11:03 id_dsa
-rw-r--r--  1 root root  608 Apr 10 11:03 id_dsa.pub

I copied the key into authorized_keys with:

cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys  

And when I cat authorized_keys, the key has been added.

So, when I try to connect with sftp -v root@testserver (just locally, again, for testing some code, but that's irrelevant), I still get the password prompt. Here's a section of the verbose output:

debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/id_rsa
debug1: Offering DSA public key: /root/.ssh/id_dsa
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Next authentication method: password
root@testserver's password:

Have I missed something obvious? Or will it not work connecting locally?

Thanks
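Not from the question, but worth ruling out first: the debug output shows the key being offered and the server falling through to password, which is typically a server-side rejection. sshd's StrictModes can silently ignore keys when ~/.ssh, authorized_keys, or the home directory itself is too permissive, and the server-side switches live in sshd_config, not ssh_config. A tightening sketch; `fix_ssh_perms` is my own helper name, and the demo runs against a scratch directory:

```shell
# Hypothetical helper: tighten the permissions sshd's StrictModes checks
# before it will accept a key. Point it at the account's ~/.ssh.
fix_ssh_perms() {
  local d="$1" f
  chmod 700 "$d"
  [ -f "$d/authorized_keys" ] && chmod 600 "$d/authorized_keys"
  for f in "$d"/id_*; do
    [ -f "$f" ] || continue
    case "$f" in
      *.pub) chmod 644 "$f" ;;   # public halves may stay world-readable
      *)     chmod 600 "$f" ;;   # private keys must not be
    esac
  done
}

# Demo against a scratch directory:
mkdir -p /tmp/sshdemo && touch /tmp/sshdemo/authorized_keys /tmp/sshdemo/id_dsa
chmod 666 /tmp/sshdemo/authorized_keys /tmp/sshdemo/id_dsa
fix_ssh_perms /tmp/sshdemo
stat -c '%a %n' /tmp/sshdemo/authorized_keys /tmp/sshdemo/id_dsa
```

If permissions check out, the next things to look at are PermitRootLogin in sshd_config and sshd's own log (/var/log/auth.log on Ubuntu) for the reason the key was refused.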
