Tuesday, March 30, 2021

Recent Questions - Server Fault



query user with powershell, output to email

Posted: 30 Mar 2021 10:03 PM PDT

I can't figure out what I am doing wrong here in PowerShell. I am trying to email myself the logged-in user accounts whenever a user changes, but I can't get it to work properly: it just gives me garbled, unformatted messages. It works to a point, but I would like it formatted instead of emailing me a single line.

Is this possible?

$users = query user /server:localhost
$SmtpServer = 'xx.xx.xx.xx'            # SMTP Server name
$Port = 25                             # SMTP server port number - default is 25
$From = 'no-reply-server@xx.xx.xx.xx'  # from address - doesn't have to be valid, just in the format user@domain.ext and something your mail filter will not block
$To = 'xxxx@gmail.com'                 # email address to send test to
$Subject = 'xxx.xx.xx.xx - Server login detected of account.'  # email subject
$Body = 'This is an automated alert to notify you that there has been a change in user status:    '+$users+'    '  # email body

Send-MailMessage -SmtpServer $SmtpServer -Port $Port -From $From -To $To -Subject $Subject -Body $Body

I get the email as follows:

USERNAME SESSIONNAME ID STATE IDLE TIME LOGON TIME accountname rdp-tcp#39 3 Active . 3/30/2021 5:54 PM accountname2 rdp-tcp#94 9 Active 29 3/30/2021 9:01 PM
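One likely fix, sketched below: query user returns an array of strings, and concatenating it with + collapses the array into a single line. Joining the array with explicit newlines (or piping it through Out-String) keeps one session per line in a plain-text mail body:

$users = (query user /server:localhost) -join "`r`n"   # one session per line
$Body = "This is an automated alert to notify you that there has been a change in user status:`r`n`r`n$users"
Send-MailMessage -SmtpServer $SmtpServer -Port $Port -From $From -To $To -Subject $Subject -Body $Body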

How to share unix domain socket between containers without named volume?

Posted: 30 Mar 2021 08:57 PM PDT

I have two containers A and B which need to talk via a unix domain socket created by A at /var/run/notif.sock.

I can't use a named volume, since sharing /var/run between containers is risky: it contains each container's runtime data.

A bind mount requires the host to already have a file with the same name.

The problem with a bind mount is that if I create a host file with the same name as the socket file and bind mount it like this: -v /var/run/notif.sock:/var/run/notif.sock, Docker automatically creates a plain file /var/run/notif.sock inside container A, and when the actual process inside container A tries to create the socket, socket creation fails with the error "address already in use", because a file with the same name is already present.

Is there any other way to go about this?
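A minimal sketch of one workaround, assuming the socket path inside the containers is under your control (the host path and image names below are illustrative): bind mount a dedicated, empty host directory, rather than a single file under /var/run, into both containers and let A create the socket inside it. Docker then has nothing to pre-create, so the bind cannot collide with the socket.

# share an empty, dedicated host directory between both containers
mkdir -p /srv/notif
docker run -d -v /srv/notif:/var/run/notif image-a   # A creates /var/run/notif/notif.sock
docker run -d -v /srv/notif:/var/run/notif image-b   # B connects to the same socket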

Can I setup DNS in such a way that if one IP address fails, it uses the other IP?

Posted: 30 Mar 2021 09:36 PM PDT

I intend to create a SaaS with two load balancers. If one load balancer goes down, 'MySaaSApp.com' should point to the other load balancer. Can this be accomplished through DNS records alone? Thanks!
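For scale, here is what DNS alone gives you, sketched with illustrative addresses: two A records produce round-robin resolution, which spreads clients across both load balancers but does not health-check anything, so clients can keep resolving to the dead IP until the TTL expires. Automatic failover needs health-checked DNS (e.g. Route 53 failover records) or a floating IP in front of the load balancers.

; zone-file sketch: round-robin A records (addresses illustrative)
MySaaSApp.com.    60    IN    A    203.0.113.10
MySaaSApp.com.    60    IN    A    203.0.113.20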

Patterns or templates in systemd scopes?

Posted: 30 Mar 2021 08:33 PM PDT

I'm working with an external service that creates its own systemd scopes like system.slice/someapp.slice/instance-400.scope or user.slice/user-1000.slice/app-1234.scope.

The scope names embed the pid, so they are dynamic. The Id and Name both match the unit name exactly.

Does systemd offer any way to write a .scope unit file that will apply [Scope] properties to all instances of the scope whose names match a pattern, or is this something that can only really be done with the cooperation of the launching app?

Details/related at: Applying systemd control group resource limits automatically to specific user applications in a gnome-shell session

Applying systemd control group resource limits automatically to specific user applications in a gnome-shell session

Posted: 30 Mar 2021 08:50 PM PDT

Having seen that GNOME now launches apps under systemd scopes I've been looking at a way to get systemd to apply some cgroup resource and memory limits to my browser.

I want to apply a MemoryMax and CPUShares to all app-gnome-firefox-*.scope instances per systemd.resource-control.

But GNOME isn't launching firefox with the instantiated unit format app-gnome-firefox-@.scope so I don't know how to make a systemd unit file that will apply automatically to all app-gnome-firefox-*.scope instances.

I can manually apply the resource limits to an instance with systemctl set-property --user app-gnome-firefox-92450.scope (for example) once the unit starts, but that's a pain.

Is there any way to inject properties for transient scopes with pattern matching for names?

This isn't really gnome-shell specific; it applies just as well to a user terminal session that invokes a command with systemd-run --user --scope.

Details

Firefox is definitely launched under a systemd scope, and it gets its own cgroup:

$ systemctl --user status app-gnome-firefox-92450.scope
● app-gnome-firefox-92450.scope - Application launched by gnome-shell
     Loaded: loaded (/run/user/1000/systemd/transient/app-gnome-firefox-92450.scope; transient)
  Transient: yes
     Active: active (running) since Wed 2021-03-31 09:44:30 AWST; 32min ago
      Tasks: 567 (limit: 38071)
     Memory: 2.1G
        CPU: 5min 39.138s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope
             ├─92450 /usr/lib64/firefox/firefox
             ....
....

Verified by

$ systemd-cgls --user-unit app-gnome-firefox-92450.scope
Unit app-gnome-firefox-92450.scope (/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope):
├─92450 /usr/lib64/firefox/firefox
...

and

$ ls -d /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*
/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-92450.scope

I can apply a MemoryMax (cgroup v2 constraint memory.max) to an already-running instance with systemctl set-property and it takes effect:

$ systemctl set-property --user app-gnome-firefox-98883.scope MemoryMax=5G
$ systemctl show --user app-gnome-firefox-98883.scope | grep ^MemoryMax
MemoryMax=5368709120
$ cat /sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/app-gnome-firefox-*/memory.max
5368709120

It definitely takes effect - setting a low MemoryMax like 100M causes the firefox scope to OOM, as seen in journalctl --user -u app-gnome-firefox-98883.scope.

The trouble is that I can't work out how to apply systemd.resource-control rules automatically to new instances of the app.

I've tried creating a .config/systemd/user/app-gnome-firefox-@.scope containing

[Scope]
MemoryMax=5G

but it appears to have no effect.

systemd-analyze verify chokes on it rather unhelpfully:

$ systemd-analyze verify --user .config/systemd/user/app-gnome-firefox-@.scope
Failed to load unit file /home/craig/.config/systemd/user/app-gnome-firefox-@i.scope: Invalid argument

If I use systemctl set-property --user app-gnome-firefox-92450.scope on a running instance and systemctl --user show app-gnome-firefox-92450.scope I see the drop-in files at:

FragmentPath=/run/user/1000/systemd/transient/app-gnome-firefox-98883.scope
DropInPaths=/run/user/1000/systemd/transient/app-gnome-firefox-98883.scope.d/50-MemoryMax.conf

It has Names containing the pid, so that can't be matched easily:

Id=app-gnome-firefox-98883.scope
Names=app-gnome-firefox-98883.scope

and I'm kind of stumped. Advice would be greatly appreciated, hopefully not "gnome-shell is doing it wrong, patch it" advice. Some draft systemd docs suggest it's using one of the accepted patterns.

Workaround 1 - systemd-run

The only workaround I see so far is to launch the firefox instance with systemd-run myself:

systemd-run --user --scope -u firefox.scope -p 'MemoryMax=5G' -p 'CPUQuota=80%' /usr/lib64/firefox/firefox  

and let that be the control process. But it looks like this isolates the firefox control channel in some manner that prevents firefox processes launched by other apps or the desktop session from then talking to the cgroup-scoped firefox, resulting in

Firefox is already running, but is not responding. To use Firefox, you must first close the existing Firefox process, restart your device, or use a different profile.

Edit: firefox remoting when launched manually via systemd-run is fixed by setting MOZ_DBUS_REMOTE in the environment, both for my user session and as a -E MOZ_DBUS_REMOTE=1 option to systemd-run. That's probably because I'm using Wayland. A colleague on Xorg and an older system reported that it only worked for them without MOZ_DBUS_REMOTE=1.

Workaround 2 - as a user service

I landed up defining a systemd service for firefox instead.

$ systemctl --user edit --full --force firefox.service

[Unit]
Description=Run Firefox

[Service]
ExecStart=/usr/lib64/firefox/firefox
Environment=MOZ_DBUS_REMOTE=1
MemoryMax=5G
CPUQuota=400%

[Install]
WantedBy=gnome-session-initialized.target

$ systemctl --user enable firefox.service

This starts firefox on login with the desired cgroups configured etc. New firefox commands will open tabs in the autostarted instance. I guess that'll do for now.

Better options?

Still a clumsy workaround - it should surely be possible to apply resource control rules to slices via .config/systemd/user?
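One speculative avenue, not verified against this systemd version: on layouts where gnome-shell parents app scopes under the per-user app.slice, a [Slice] drop-in under ~/.config/systemd/user would cap the whole slice, with the obvious caveat that the limit then covers every app in the slice, not just firefox.

mkdir -p ~/.config/systemd/user/app.slice.d
cat > ~/.config/systemd/user/app.slice.d/50-limits.conf <<'EOF'
[Slice]
MemoryMax=5G
EOF
systemctl --user daemon-reload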

How do browsers cache PAC files?

Posted: 30 Mar 2021 06:22 PM PDT

How do browsers and other applications cache PAC files? If I'm serving a PAC file and update the configuration (e.g. add or remove a domain that should go through the proxy), when will different browsers recognize the updated configuration? Is there anything I can do on the PAC file server to flush the cache?

How long do PAC files stay in a browser cache? What will trigger the cache to be cleared?

I have found this https://support.microsoft.com/en-us/topic/how-to-disable-automatic-proxy-caching-in-internet-explorer-92735c9c-8a26-d0d8-7f8a-1b46595cbaba for IE, but I'm not finding any information for Chrome/Firefox/Edge.

how to center my cards?

Posted: 30 Mar 2021 06:06 PM PDT

I want to center my 6 cards, 3 on the top row and 3 on the bottom row. Can you help me?

[image][1]

<div class="left">
  <div class="property-card">
    <a href="">
      <div class="property-image">
        <div class="property-image-title">
          <h5>Responsive Design</h5>
        </div>
      </div>
      <div class="property-description">
        <h5>Responsive Design</h5>
        <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
      </div>
    </a>
  </div>
</div>

<div class="center">
  <div class="property-card">
    <a href="">
      <div class="property-image">
        <div class="property-image-title">
          <h5>Responsive Design</h5>
        </div>
      </div>
      <div class="property-description">
        <h5>Responsive Design</h5>
        <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
      </div>
    </a>
  </div>
</div>

<div class="right">
  <div class="property-card">
    <a href="">
      <div class="property-image">
        <div class="property-image-title">
          <h5>Responsive Design</h5>
        </div>
      </div>
      <div class="property-description">
        <h5>Responsive Design</h5>
        <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
      </div>
    </a>
  </div>
</div>

<div class="test">
  <div class="left">
    <div class="property-card">
      <a href="">
        <div class="property-image">
          <div class="property-image-title">
            <h5>Responsive Design</h5>
          </div>
        </div>
        <div class="property-description">
          <h5>Responsive Design</h5>
          <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
        </div>
      </a>
    </div>
  </div>

  <div class="center">
    <div class="property-card">
      <a href="">
        <div class="property-image">
          <div class="property-image-title">
            <h5>Responsive Design</h5>
          </div>
        </div>
        <div class="property-description">
          <h5>Responsive Design</h5>
          <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
        </div>
      </a>
    </div>
  </div>

  <div class="right">
    <div class="property-card">
      <a href="">
        <div class="property-image">
          <div class="property-image-title">
            <h5>Responsive Design</h5>
          </div>
        </div>
        <div class="property-description">
          <h5>Responsive Design</h5>
          <p>L'adaptation multi-supports des sites web : un nouveau besoin, une autre façon de concevoir la structure d'un site</p>
        </div>
      </a>
    </div>
  </div>
</div>

[1]: https://i.stack.imgur.com/PB6XN.png
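A minimal flexbox sketch, reusing the question's class names; the .cards wrapper is hypothetical and would enclose all six column divs:

.cards {                    /* hypothetical wrapper around the six cards */
  display: flex;
  flex-wrap: wrap;          /* three cards per row, second row wraps under */
  justify-content: center;  /* centers each row horizontally */
  gap: 20px;
}
.property-card {
  flex: 0 1 300px;          /* roughly three ~300px cards per row */
}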

No certificate templates in Certificate Services server on Windows 2019

Posted: 30 Mar 2021 05:58 PM PDT

There are a lot of discussions on here about this problem, and I have spent all day exploring every one of them. All the servers involved in this are Windows 2019. 2 domain controllers and a certificate authority server. After installing the Certificate Services feature, I then also installed the Web Enrollment stuff. But when I go to my server's enrollment site, I am greeted with this error:

[screenshot of the error]

I've chased down several options. I validated permissions via: https://www.altaro.com/hyper-v/windows-ssl-certificate-templates/

I verified LDAP object configurations via: https://docs.microsoft.com/en-US/troubleshoot/windows-server/windows-security/no-certificate-templates-be-found

I verified the configuration from this site: https://docs.microsoft.com/en-us/answers/questions/96739/certificate-template-not-showing.html

I'm running out of options. The template is clearly there: [screenshot]

I'm completely out of ideas. I'm frustrated by this. Has anyone got any other ideas?

L2TP / IPSec Public IP Address

Posted: 30 Mar 2021 05:03 PM PDT

I currently have a server set up (Ubuntu) which I'm using as a VPN server with L2TP and IPsec.

However, when connecting to this, it issues a LAN IP address (server side). I would like the server to issue the public IP address to the client.

How would I achieve this? Apologies, I'm totally new to this but interested in learning.

L2TPD.conf currently looks like this:

[global]
port = 1701

[lns default]
ip range = 192.168.42.10-192.168.42.250
local ip = 192.168.42.1
require chap = yes
refuse pap = yes
require authentication = yes
name = l2tpd
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

The client is being issued with a LAN IP of 192.168.42.10.

Thanks in advance!
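A hedged note on what is achievable here: with a single public IP, L2TP/IPsec clients keep their private pool addresses (192.168.42.x); what you can do is NAT their traffic so it leaves via the public interface and is seen by the internet as A.B.C.D. A sketch, assuming the public NIC is eth0:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.42.0/24 -o eth0 -j MASQUERADE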

How to run a bash command on all Cloud Foundry Auto-scaled Linux VMs at launch time

Posted: 30 Mar 2021 06:17 PM PDT

I have an application hosted on Cloud Foundry (Open Source, not Pivotal Cloud Foundry). The instances for this application are autoscaled, meaning a new instance can be launched at any time, and an existing instance can be terminated at any time. The OS is Ubuntu 18.04.3 LTS.

I would like to run the following bash script line at the time a new instance is launched. It downloads and installs the Datadog Linux agent on the host so that the host sends its metrics to Datadog.

DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=111aaa........999bbb DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"  

How can I do this with a minimal effort?
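One low-effort sketch, assuming the standard Cloud Foundry buildpack lifecycle: CF runs a .profile script from the application root each time an instance starts, so the install line can live there and execute on every newly launched (including autoscaled) instance. Note the agent then installs into the app container, not the underlying Diego cell.

# .profile in the app root (key below is the question's placeholder)
DD_AGENT_MAJOR_VERSION=7 DD_API_KEY=111aaa........999bbb DD_SITE="datadoghq.com" \
  bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"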

Iptables block access docker container from host

Posted: 30 Mar 2021 06:46 PM PDT

I have iptables rules that block access to a Docker container from the host (access from the outside network works fine). Most of these rules were written by a former coworker, so I have essentially no experience writing iptables rules.

Could someone advise which lines of the rules I should edit, remove, or add so that I can simply curl my Docker container from the host?

Here are my iptables rules:

-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N cphulk
-N dynamic
-N loc-fw
-N loc_frwd
-N logdrop
-N logflags
-N logreject
-N net-fw
-N net-loc
-N net_frwd
-N reject
-N sha-lh-f039fe5b47b48a558b61
-N sha-rh-5f1a9db64e7d114e7d5b
-N shorewall
-N smurflog
-N smurfs
-N tcpflags
-A INPUT -j cphulk
-A INPUT -i eth0 -j net-fw
-A INPUT -i eth1 -j loc-fw
-A INPUT -i lo -j ACCEPT
-A INPUT -m addrtype --dst-type BROADCAST -j DROP
-A INPUT -m addrtype --dst-type ANYCAST -j DROP
-A INPUT -m addrtype --dst-type MULTICAST -j DROP
-A INPUT -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "INPUT REJECT " --log-level 6
-A INPUT -g reject
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-d7d9cacee34d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-d7d9cacee34d -j DOCKER
-A FORWARD -i br-d7d9cacee34d ! -o br-d7d9cacee34d -j ACCEPT
-A FORWARD -i br-d7d9cacee34d -o br-d7d9cacee34d -j ACCEPT
-A FORWARD -o br-72d36b8824e3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-72d36b8824e3 -j DOCKER
-A FORWARD -i br-72d36b8824e3 ! -o br-72d36b8824e3 -j ACCEPT
-A FORWARD -i br-72d36b8824e3 -o br-72d36b8824e3 -j ACCEPT
-A FORWARD -i eth0 -j net_frwd
-A FORWARD -i eth1 -j loc_frwd
-A FORWARD -m addrtype --dst-type BROADCAST -j DROP
-A FORWARD -m addrtype --dst-type ANYCAST -j DROP
-A FORWARD -m addrtype --dst-type MULTICAST -j DROP
-A FORWARD -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "FORWARD REJECT " --log-level 6
-A FORWARD -g reject
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 1337 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-d7d9cacee34d ! -o br-d7d9cacee34d -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-72d36b8824e3 ! -o br-72d36b8824e3 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-d7d9cacee34d -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-72d36b8824e3 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T21:20:09 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T21:39:50 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T22:04:17 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T22:04:18 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T22:13:35 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-30T23:25:36 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T02:26:53 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T02:26:54 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T03:21:43 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T07:59:55 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T15:33:49 -j DROP
-A cphulk -s xxxxxxx/32 -m state --state NEW -m time --datestop 2021-03-31T16:09:47 -j DROP
-A loc-fw -j dynamic
-A loc-fw -m conntrack --ctstate INVALID,NEW,UNTRACKED -j smurfs
-A loc-fw -p tcp -j tcpflags
-A loc-fw -j ACCEPT
-A loc_frwd -j dynamic
-A loc_frwd -m conntrack --ctstate INVALID,NEW,UNTRACKED -j smurfs
-A loc_frwd -p tcp -j tcpflags
-A loc_frwd -o eth0 -j ACCEPT
-A logdrop -j DROP
-A logflags -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "logflags DROP " --log-level 6 --log-ip-options
-A logflags -j DROP
-A logreject -j reject
-A net-fw -j dynamic
-A net-fw -m conntrack --ctstate INVALID,NEW,UNTRACKED -j smurfs
-A net-fw -p udp -m udp --dport 67:68 -j ACCEPT
-A net-fw -p tcp -j tcpflags
-A net-fw -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A net-fw -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A net-fw -p tcp -m multiport --dports 22,53,80,443,10000,3306,5500,2087,2083,21,110,995,993,25,465 -j ACCEPT
-A net-fw -p tcp -m multiport --dports 587,2096,5432,8080 -j ACCEPT
-A net-fw -p tcp -m multiport --dports 8181 -j ACCEPT
-A net-fw -p udp -m udp --dport 53 -j ACCEPT
-A net-fw -m addrtype --dst-type BROADCAST -j DROP
-A net-fw -m addrtype --dst-type ANYCAST -j DROP
-A net-fw -m addrtype --dst-type MULTICAST -j DROP
-A net-fw -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "net-fw DROP " --log-level 6
-A net-fw -j DROP
-A net-loc -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A net-loc -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A net-loc -m addrtype --dst-type BROADCAST -j DROP
-A net-loc -m addrtype --dst-type ANYCAST -j DROP
-A net-loc -m addrtype --dst-type MULTICAST -j DROP
-A net-loc -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "net-loc DROP " --log-level 6
-A net-loc -j DROP
-A reject -m addrtype --src-type BROADCAST -j DROP
-A reject -s 224.0.0.0/4 -j DROP
-A reject -p igmp -j DROP
-A reject -p tcp -j REJECT --reject-with tcp-reset
-A reject -p udp -j REJECT --reject-with icmp-port-unreachable
-A reject -p icmp -j REJECT --reject-with icmp-host-unreachable
-A reject -j REJECT --reject-with icmp-host-prohibited
-A shorewall -m recent --set --name %CURRENTTIME --mask 255.255.255.255 --rsource
-A smurflog -m hashlimit --hashlimit-upto 1/sec --hashlimit-burst 10 --hashlimit-mode srcip --hashlimit-name lograte -j LOG --log-prefix "smurfs DROP " --log-level 6
-A smurflog -j DROP
-A smurfs -s 0.0.0.0/32 -j RETURN
-A smurfs -m addrtype --src-type BROADCAST -g smurflog
-A smurfs -s 224.0.0.0/4 -g smurflog
-A tcpflags -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,PSH,URG -g logflags
-A tcpflags -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -g logflags
-A tcpflags -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -g logflags
-A tcpflags -p tcp -m tcp --tcp-flags FIN,RST FIN,RST -g logflags
-A tcpflags -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -g logflags
-A tcpflags -p tcp -m tcp --tcp-flags FIN,PSH,ACK FIN,PSH -g logflags
-A tcpflags -p tcp -m tcp --sport 0 --tcp-flags FIN,SYN,RST,ACK SYN -g logflags

Thank you
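A hedged reading of these rules: a curl from the host to 172.17.0.2 goes out via OUTPUT (policy ACCEPT), but the container's replies come back in on the docker0 (or br-*) bridge and hit INPUT, which only accepts eth0, eth1, and lo before rejecting everything else. Something like the following, inserted ahead of the reject, is worth testing:

iptables -I INPUT 4 -i docker0 -j ACCEPT
iptables -I INPUT 4 -i br-d7d9cacee34d -j ACCEPT
iptables -I INPUT 4 -i br-72d36b8824e3 -j ACCEPT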

Mac EC2 instance launch failure - The requested configuration is currently not supported. Please check the documentation for supported configurations

Posted: 30 Mar 2021 08:38 PM PDT

I'm trying to create a Mac EC2 instance by following this AWS User Guide however it fails with the error seen in the screenshot below:

EC2 Mac Error

What I have tried so far :

  • Using other AWS regions such as Ireland and Oregon with a combination of different availability zones, but I still received the same error message.

  • Launching a Mac EC2 instance using the AWS CLI command below:

    aws ec2 allocate-hosts --region us-east-1 --instance-type mac1.metal --availability-zone us-east-1c --quantity 1 --profile syumaK  

The above command failed with the message below:

An error occurred (UnsupportedHostConfiguration) when calling the AllocateHosts operation: The requested configuration is currently not supported. Please check the documentation for supported configurations.  
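One sanity check worth running first, sketched below: mac1.metal dedicated hosts exist only in a handful of zones, and the CLI can list which Availability Zones in a region actually offer the instance type before you try to allocate:

aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=mac1.metal \
  --region us-east-1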

How to get the azure VMUUID of azure VM?

Posted: 30 Mar 2021 04:19 PM PDT

I am creating an ARM template for the Azure Log Analytics workspace. It has some queries which use the Azure VM's VMUUID. Is there any way to fetch the Azure VM's VMUUID in an ARM template, or any other way to fetch it?
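Two hedged options, both assuming the vmId property is what the workspace queries mean by VMUUID: inside a template, the reference() function returns the VM resource's properties, which include vmId; from inside the VM, the Instance Metadata Service exposes the same value.

"[reference(resourceId('Microsoft.Compute/virtualMachines', parameters('vmName')), '2021-03-01').vmId]"

# or from inside the VM (IMDS is only reachable from the VM itself):
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2021-02-01&format=text"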

Incoming external email appeared in Postfix log, but not in mailbox

Posted: 30 Mar 2021 06:48 PM PDT

I set up Postfix + Dovecot (with IMAP) on my Ubuntu 16.04 email server. I sent an email from Gmail to my domain; I can see the Postfix log "react" to the incoming email, but the email never appears in the mailbox.

Here are the related part of the logs:

Mar 30 02:15:17 yourappapp-vm postfix/smtp[27680]: Untrusted TLS connection established to gmail-smtp-in.l.google.com[74.125.24.27]:25: TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits)
Mar 30 02:15:18 yourappapp-vm postfix/smtp[27680]: C5EBB1BCB3B: to=<example@gmail.com>, orig_to=<info@yourappapp.com>, relay=gmail-smtp-in.l.google.com[74.125.24.27]:25, delay=1.3, delays=0/0.01/0.7/0.63, dsn=2.0.0, status=sent (250 2.0.0 OK  1617070518 y128si20297886pfb.155 - gsmtp)
Mar 30 02:15:18 yourappapp-vm postfix/qmgr[27605]: C5EBB1BCB3B: removed

It looks like an outgoing email, but I didn't send anything at that time; I only get this entry when I send an email from Gmail. How can I debug this further?

p.s. for the untrusted TLS issue, I fixed it by adding CApath to Postfix config. Still no email in mailbox.

In my ~/Maildir/, all sub-folders are empty.

p.s. the mail server can send out emails, therefore the SMTP part is fine.


More debugging info below:

Network Ports

Postfix is listening to 25, 587 and 465

$ ss -lnpt | grep master
LISTEN     0      100          *:25                       *:*                   users:(("master",pid=27603,fd=12))
LISTEN     0      100          *:587                      *:*                   users:(("master",pid=27603,fd=16))
LISTEN     0      100          *:465                      *:*                   users:(("master",pid=27603,fd=19))

Dovecot is listening to 143 and 993

$ ss -lnpt | grep dovecot
LISTEN     0      100          *:993                      *:*                   users:(("dovecot",pid=27649,fd=36))
LISTEN     0      100          *:143                      *:*                   users:(("dovecot",pid=27649,fd=35))

main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#Enable TLS Encryption when Postfix receives incoming emails
smtpd_tls_cert_file=/etc/letsencrypt/live/mail.yourappapp.com/fullchain.pem
smtpd_tls_key_file=/etc/letsencrypt/live/mail.yourappapp.com/privkey.pem
smtpd_tls_security_level=may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache

#Enable TLS Encryption when Postfix sends outgoing emails
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

#Enforce TLSv1.3 or TLSv1.2
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1

smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtp_tls_CApath = /etc/ssl/certs
smtpd_tls_CApath = /etc/ssl/certs

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mail.yourappapp.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = yourappapp.com, $myhostname, localhost.$mydomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = ipv4
home_mailbox = Maildir/
# LMTP
mailbox_transport = lmtp:unix:private/dovecot-lmtp
smtputf8_enable = no
mydomain = yourappapp.com

master.cf

#
# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================
smtp      inet  n       -       y       -       -       smtpd
#smtp      inet  n       -       y       -       1       postscreen
#smtpd     pass  -       -       y       -       -       smtpd
#dnsblog   unix  -       -       y       -       0       dnsblog
#tlsproxy  unix  -       -       y       -       0       tlsproxy
submission inet n       -       y       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_wrappermode=no
  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
  -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
#  -o milter_macro_daemon_name=ORIGINATING
smtps     inet  n       -       y       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
  -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
#  -o milter_macro_daemon_name=ORIGINATING
#628       inet  n       -       y       -       -       qmqpd
pickup    unix  n       -       y       60      1       pickup
cleanup   unix  n       -       y       -       0       cleanup
qmgr      unix  n       -       n       300     1       qmgr
#qmgr     unix  n       -       n       300     1       oqmgr
tlsmgr    unix  -       -       y       1000?   1       tlsmgr
rewrite   unix  -       -       y       -       -       trivial-rewrite
bounce    unix  -       -       y       -       0       bounce
defer     unix  -       -       y       -       0       bounce
trace     unix  -       -       y       -       0       bounce
verify    unix  -       -       y       -       1       verify
flush     unix  n       -       y       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       y       -       -       smtp
relay     unix  -       -       y       -       -       smtp
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       y       -       -       showq
error     unix  -       -       y       -       -       error
retry     unix  -       -       y       -       -       error
discard   unix  -       -       y       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       y       -       -       lmtp
anvil     unix  -       -       y       -       1       anvil
scache    unix  -       -       y       -       1       scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent.  See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#  mailbox_transport = lmtp:inet:localhost
#  virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus     unix  -       n       n       -       -       pipe
#  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix  -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp      unix  -       n       n       -       -       pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail    unix  -       n       n       -       -       pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -       n       n       -       -       pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix  -   n   n   -   2   pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -       n       n       -       -       pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
  ${nexthop} ${user}

SMTP session looks fine as well:

220 mail.yourappapp.com ESMTP Postfix (Ubuntu)

250 mail.yourappapp.com

EHLO test.network-tools.com
250-mail.yourappapp.com
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN

VRFY info
252 2.0.0 info

RSET
250 2.0.0 Ok

EXPN info
502 5.5.2 Error: command not recognized

RSET
250 2.0.0 Ok

MAIL FROM:<admin@Network-Tools.com>
250 2.1.0 Ok

RCPT TO:<info@yourappapp.com>
250 2.1.5 Ok

RSET
250 2.0.0 Ok

QUIT

Added debug information based on the answer:

Telnet session log

$ telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 mail.yourappapp.com ESMTP Postfix (Ubuntu)
HELO localhost
250 mail.yourappapp.com
MAIL FROM: <example@gmail.com>
250 2.1.0 Ok
RCPT TO: <info@yourappapp.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
From: <example@gmail.com>
To: <info@yourappapp.com>
Subject: Test from Telnet

This is a test

.
250 2.0.0 Ok: queued as BB9021B9B62

QUIT
221 2.0.0 Bye
Connection closed by foreign host.

Related Mail Log

Mar 31 01:43:27 yourappapp-vm postfix/smtpd[8491]: connect from localhost[127.0.0.1]
Mar 31 01:43:54 yourappapp-vm postfix/smtpd[8491]: BB9021B9B62: client=localhost[127.0.0.1]
Mar 31 01:44:31 yourappapp-vm postfix/cleanup[8494]: BB9021B9B62: message-id=<20210331014354.BB9021B9B62@mail.yourappapp.com>
Mar 31 01:44:31 yourappapp-vm postfix/qmgr[32352]: BB9021B9B62: from=<example@gmail.com>, size=391, nrcpt=1 (queue active)
Mar 31 01:44:31 yourappapp-vm postfix/cleanup[8494]: 9636F1B9B63: message-id=<20210331014354.BB9021B9B62@mail.yourappapp.com>
Mar 31 01:44:31 yourappapp-vm postfix/qmgr[32352]: 9636F1B9B63: from=<raptor.shivan@gmail.com>, size=526, nrcpt=1 (queue active)
Mar 31 01:44:31 yourappapp-vm postfix/local[8498]: BB9021B9B62: to=<info@yourappapp.com>, relay=local, delay=45, delays=45/0/0/0, dsn=2.0.0, status=sent (forwarded as 9636F1B9B63)
Mar 31 01:44:31 yourappapp-vm postfix/qmgr[32352]: BB9021B9B62: removed
Mar 31 01:44:32 yourappapp-vm postfix/smtp[8499]: Trusted TLS connection established to gmail-smtp-in.l.google.com[172.217.194.27]:25: TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits)
Mar 31 01:44:33 yourappapp-vm postfix/smtp[8499]: 9636F1B9B63: to=<example@gmail.com>, orig_to=<info@yourappapp.com>, relay=gmail-smtp-in.l.google.com[172.217.194.27]:25, delay=1.4, delays=0/0.01/0.69/0.71, dsn=2.0.0, status=sent (250 2.0.0 OK  1617155072 u21si587511pfl.125 - gsmtp)
Mar 31 01:44:33 yourappapp-vm postfix/qmgr[32352]: 9636F1B9B63: removed
Mar 31 01:44:35 yourappapp-vm postfix/smtpd[8491]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 data=1 quit=1 commands=5

Both logs look NORMAL, but no email has been received in the mailbox.
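A hedged reading of the telnet test, which looks like the actual cause: status=sent (forwarded as 9636F1B9B63) means the local delivery agent redirected info@yourappapp.com back out to a Gmail address — classically via an /etc/aliases entry or a ~/.forward file — so the message never reaches the Maildir. Worth checking:

grep -i '^info' /etc/aliases   # alias redirecting info@ to gmail?
ls -la ~/.forward              # per-user forwarding for the local account

(Note that main.cf sets both home_mailbox and mailbox_transport; aliasing happens before either takes effect.)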

NGINX Reverse Proxy What happens if someone browses to an IP instead of a URL

Posted: 30 Mar 2021 05:02 PM PDT

I have a nginx reverse proxy that acts as a one to many (single public IP) proxy for three other web servers.

I have all the blocks set up to redirect to each server depending on what URL is provided by the client.

What happens if the client simply puts the reverse proxy's IP address in their browser instead of a URL? How does nginx determine where to send the traffic?

I just tried it and it seems to send the traffic to the last server that it forwarded traffic to?

How do I drop/deny traffic that does not match one of the three server blocks in my configuration (i.e. traffic that uses an IP instead of URL)?

Update: For my configuration, here is the only conf file in sites-enabled:

######## Server1 ########
server {
        if ($host = server1.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server1.domain.com;
        return 404;
}
server {
        listen 443 ssl;                                                                # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/server1.domain.com/fullchain.pem;       # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server1.domain.com/privkey.pem;     # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;                              # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;                                # managed by Certbot

        server_name server1.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.2:443;
        }

        location ^~ /wp-login.php {
                satisfy any;
                allow 172.20.5.2;
                deny all;

                proxy_pass https://192.168.20.2:443;
        }
}
######## Server2 ########
server {
        if ($host = server2.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server2.domain.com;
        return 404;
}

server {
        listen 443 ssl http2;
        ssl_certificate /etc/letsencrypt/live/server2.domain.com/fullchain.pem;   # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server2.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        server_name server2.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.3:443;
        }
}

######## Server3 ########
server {
        if ($host = server3.domain.com) {
                return 301 https://$host$request_uri;
        }

        listen 80;
        server_name server3.domain.com;
        return 404;
}
server {
        listen 443 ssl http2;
        ssl_certificate /etc/letsencrypt/live/server3.domain.com/fullchain.pem;   # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/server3.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        server_name server3.domain.com;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass_request_headers on;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass https://192.168.20.4:443;
        }
}

Nginx reverse proxy IP is 192.168.20.6

So what I am seeing is that if I put just the IP into my browser, NGINX goes to the first server block in my conf file, which tracks with this link: https://nginx.org/en/docs/http/request_processing.html

And it does try and load server1 in my case, but since the serving of website content is based upon the URL, it sorta breaks some features of my three web servers.

Looking at that link above, I see that I can employ a block like this at the beginning to block IP only requests?

server {
    listen      80;
    listen      443;
    server_name "";
    return      444;
}
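A sketch of the usual shape of that catch-all, with two caveats folded in: it should be marked default_server so it wins for bare-IP or unknown-Host requests, and on 443 it still needs some certificate (reusing an existing one works) so the TLS handshake can complete before nginx returns 444:

server {
    listen 80  default_server;
    listen 443 ssl default_server;
    ssl_certificate     /etc/letsencrypt/live/server1.domain.com/fullchain.pem;  # any existing cert
    ssl_certificate_key /etc/letsencrypt/live/server1.domain.com/privkey.pem;
    server_name _;
    return 444;
}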

Can I run a single-node K3S cluster?

Posted: 30 Mar 2021 06:06 PM PDT

I am aware of the consequences and issues with running a single-node cluster. However, I'm still curious if it's possible. I plan on setting everything up myself.

In other words, can I run the control plane and a worker node on the same physical machine?
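For what it's worth, a single-node setup is k3s's default install path: the server process runs the control plane and also registers itself as a schedulable node, so no separate agent is needed.

curl -sfL https://get.k3s.io | sh -   # installs and starts a k3s server
sudo k3s kubectl get nodes            # expect one Ready node acting as both roles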

In Powershell, how to check drive space on remote server with non-admin account?

Posted: 30 Mar 2021 04:23 PM PDT

I work in an active directory environment with many servers. I have a user account that I would like to use to check the status of other servers, without giving this account full administrative access to these other servers. Specifically, I want to check the drive space on these servers, and I'd like to do it with Powershell.

I have executed Enable-PSRemoting on the target server, and I can successfully invoke Get-PSDrive on them remotely using administrator credentials. The results are returned almost immediately, and include information about the used / free space on all drives.

However, when I run the same command (Invoke-Command -computer remoteserver1 {Get-PSDrive -PSProvider FileSystem}) as the non-administrative user, the results come back very slowly (takes about 30 seconds), and none of the drives have any information about their used / free space.

What I have done so far:

  • I have added the non-administrative user account to the Remote Management Users group on the target server.
  • Edited SDDL for scmanager (on the target server) to add the same DiscretionaryACL for Remote Management Users as Built-in Administrators have.
  • Per this post, I have granted this user WMI permissions in wmimgmt.exe > WMI Control (Local) > (right click) > Properties > Security tab > Expand 'Root' > click on SECURITY node > click 'Security' button > added non-admin user with full access.
  • Added user to the Distributed COM Users group on the target server.

Some also suggested trying Invoke-Command -computer remoteserver1 {Get-WmiObject -Class Win32_MappedLogicalDisk} to troubleshoot, but it comes back 'Access is denied.' I believe if I could get Get-WmiObject working successfully for this limited user, it would solve my issue.

What should I do to give this limited user account the access it needs to check drive space on other servers, without giving the account admin rights, and preferably without having to map and unmap any drives?
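Once the WMI permissions take hold, a sketch of a direct query that sidesteps the Get-PSDrive limitation (CIM over WinRM; no admin session needed if the namespace grants the account Remote Enable and Enable Account):

Get-CimInstance -ComputerName remoteserver1 -ClassName Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID,
        @{n='SizeGB'; e={[math]::Round($_.Size/1GB, 1)}},
        @{n='FreeGB'; e={[math]::Round($_.FreeSpace/1GB, 1)}}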

OpenVPN client timeout when browsing https sites on Windows 7

Posted: 30 Mar 2021 10:03 PM PDT

Client config

client
dev tun
proto tcp
remote -.-.-.- 443
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-128-CBC
auth SHA256
key-direction 1
comp-lzo
verb 3

Server config (on Ubuntu 16)

port 443
proto tcp
cert server.crt
key server.key  # This file should be kept secret
dh dh2048.pem
server 10.77.77.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 8.8.8.8"
client-to-client
duplicate-cn
keepalive 10 120
tls-auth ta.key 0 # This file is secret
key-direction 0
cipher AES-128-CBC   # AES
auth SHA256
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3

The issue occurs only on my Windows machines when I try to open secure sites like https://google.com; when I connect from my iPhone or Ubuntu over the same OpenVPN, connections never time out.

EDIT: For example, if I ping google.com while browsing a non-secure (http) site, the ping is stable and fine, but the moment I open a secure (https) site the ping shoots up very high, to about 2k ms, and then times out for about 1-2 minutes before returning to normal. The page eventually loads, but only after that long delay.

I'm using the OpenVPN TAP adapter for Windows. Could it be an adapter problem, or perhaps the cipher?
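One hedged thing to test before blaming the adapter: symptoms like this (small http pages fine, TLS handshakes stalling for minutes) are classic MTU/MSS trouble, aggravated by TCP-over-TCP tunnels. Clamping the MSS on the server side is a cheap experiment:

# server.conf additions (values illustrative; tune downward if needed)
tun-mtu 1500
mssfix 1360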

Azure - Connecting to Blob storage from app service through vnet

Posted: 30 Mar 2021 05:03 PM PDT

I've set up a VNET with a point-to-site GW and two other subnets.

  • VNET
    • StorageSubnet (with service endpoint to storage)
    • GWSubnet (with service endpoint to storage)
    • noStorage

I've connected my web app to the VNET, but I get an exception when trying to list the blobs [1]. If I make the storage account publicly available, everything works as expected.

To figure out where it fails, I set up two small VMs on StorageSubnet and noStorage respectively. As expected, one can list blobs with the Azure CLI and the other fails. This way I was also able to view Effective Routes, where the service endpoint appears.

Is there a way to view the Effective Routes on an app services instance? (my webapp)

The app service (my webapp) connects to the VNET, not a subnet. Is there something I'm missing, some manual routing needed? I'd expect this to route just like my VM test.

Is there a way for me to run Azure CLI on the app service, or some other next step in debugging?

[1]

Microsoft.WindowsAzure.Storage.StorageException
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.<ExecuteAsyncInternal>d__4`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer.<ListBlobsSegmentedAsync>d__61.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()

Migrating Jenkins jobs from one server to another

Posted: 30 Mar 2021 07:00 PM PDT

I copied the Jenkins "jobs" directory from server A (VB) to server B (AWS). The jobs directory shows up on server B with all the files in it, but those jobs don't appear in the Jenkins UI. Please help.

Thank you!
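A sketch of the usual missing steps, assuming a standard $JENKINS_HOME layout (paths and the serverB URL are illustrative): fix ownership of the copied directories, then make Jenkins re-read its configuration from disk.

chown -R jenkins:jenkins /var/lib/jenkins/jobs   # Jenkins must own the copied jobs
# then "Manage Jenkins -> Reload Configuration from Disk", or via the CLI:
java -jar jenkins-cli.jar -s http://serverB:8080/ reload-configuration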

Puppet certificate verification failed even after certificate regeneration

Posted: 30 Mar 2021 09:04 PM PDT

Summary

I had to rebuild a server, and run into an interesting issue. Puppet refuses to verify the certificates, even after removing /var/lib/puppet/ssl and cleaning the certificate off of the master.

Servers

Master:
OS: Ubuntu 14.04
Puppet Master version: 3.4.3-1 (using Webrick)

Agent:
OS: Ubuntu 12.04
Puppet version: 2.7.11
(Note that I replaced the hostname with 'agent-server.com' in the below output)

Replication Steps

  • Remove SSL dir on agent-server:

    rm -fr /var/lib/puppet/ssl

  • Clean the certificate on the Puppet Master:

    puppet cert clean agent-server.com

  • Restart Puppet Master:

    /etc/init.d/puppetmaster restart

  • Run puppet agent:

    puppet agent -t

Error messages:

Agent:

root@agent-server:~# puppet agent -t
info: Creating a new SSL key for agent-server.com
info: Caching certificate for ca
info: Creating a new SSL certificate request for agent-server.com
info: Certificate Request fingerprint (md5): F2:2A:AD:3C:D5:E8:13:82:1D:C5:80:B4:FD:23:C4:86
info: Caching certificate for agent-server.com
info: Caching certificate_revocation_list for ca
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed.  This is often because the time is out of sync on the server or client
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
err: Could not send report: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed.  This is often because the time is out of sync on the server or client

Puppet Master

[2017-08-29 18:08:51] 10.88.0.208 - - [29/Aug/2017:18:08:51 UTC] "GET /production/certificate/ca? HTTP/1.1" 200 1939
[2017-08-29 18:08:51] - -> /production/certificate/ca?
[2017-08-29 18:08:51] 10.88.0.208 - - [29/Aug/2017:18:08:51 UTC] "GET /production/certificate/agent-server.com? HTTP/1.1" 404 78
[2017-08-29 18:08:51] - -> /production/certificate/agent-server.com?
[2017-08-29 18:08:51] 10.88.0.208 - - [29/Aug/2017:18:08:51 UTC] "GET /production/certificate_request/agent-server.com? HTTP/1.1" 404 86
[2017-08-29 18:08:51] - -> /production/certificate_request/agent-server.com?
[2017-08-29 18:08:51] 10.88.0.208 - - [29/Aug/2017:18:08:51 UTC] "PUT /production/certificate_request/agent-server.com HTTP/1.1" 200 1448
[2017-08-29 18:08:51] - -> /production/certificate_request/agent-server.com
[2017-08-29 18:08:51] 10.88.0.208 - - [29/Aug/2017:18:08:51 UTC] "GET /production/certificate/agent-server.com? HTTP/1.1" 200 1448
[2017-08-29 18:08:51] - -> /production/certificate/agent-server.com?
[2017-08-29 18:08:56] 10.88.0.208 - - [29/Aug/2017:18:08:56 UTC] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 200 11220
[2017-08-29 18:08:56] - -> /production/certificate_revocation_list/ca?
[2017-08-29 18:08:56] ERROR OpenSSL::SSL::SSLError: SSL_accept returned=1 errno=0 state=SSLv3 read client certificate A: sslv3 alert certificate revoked
/usr/lib/ruby/vendor_ruby/puppet/network/http/webrick.rb:35:in `accept'
/usr/lib/ruby/vendor_ruby/puppet/network/http/webrick.rb:35:in `block (2 levels) in listen'
/usr/lib/ruby/1.9.1/webrick/server.rb:191:in `call'
/usr/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'
[2017-08-29 18:08:56] ERROR OpenSSL::SSL::SSLError: SSL_accept returned=1 errno=0 state=SSLv3 read client certificate A: sslv3 alert certificate revoked
/usr/lib/ruby/vendor_ruby/puppet/network/http/webrick.rb:35:in `accept'
/usr/lib/ruby/vendor_ruby/puppet/network/http/webrick.rb:35:in `block (2 levels) in listen'
/usr/lib/ruby/1.9.1/webrick/server.rb:191:in `call'
/usr/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread

So judging by the logs, it looks like the certificate is being treated as revoked, even though it is a brand new one. In addition, it can't be a time issue, because the two servers' clocks are only 2-3 seconds apart.

I'm rather stumped, unfortunately. Any help is appreciated.
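One way to confirm the CRL theory, sketched with Puppet 3 default paths on the master (adjust if your ssldir differs): compare the serial of the freshly signed agent certificate against the serials listed in the CA's CRL. puppet cert clean revokes by serial, so a stale CRL cached by the running WEBrick, or a revoked serial being reused, would produce exactly this alert.

openssl x509 -noout -serial -in /var/lib/puppet/ssl/ca/signed/agent-server.com.pem
openssl crl -noout -text -in /var/lib/puppet/ssl/ca/ca_crl.pem | grep -i serial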

Web Farms in Windows Server 2016 IIS with Scale-Out File Servers (SOFS)?

Posted: 30 Mar 2021 08:07 PM PDT

It's 2017 and I'm looking for some "best practices" guidance with IIS Web Farms across multiple VM servers. Pre-Server 2016 recommendations like

  • Distributed File System-Replication (DFS-R)
  • Application Request Routing (ARR) and Network Load Balancing (NLB)

might not make sense anymore given Server 2016 and Cluster Shared Volumes (CSV). The Azure IaaS platform also adds its own load balancer capability.

TechNet teases SOFS as a "recommended" solution in the "Practical Applications" section, yet gives it only one bullet point: https://technet.microsoft.com/en-us/library/hh831349(v=ws.11).aspx

"The Internet Information Services (IIS) Web server can store configuration and data for Web sites on a scale-out file share."

What's the latest "best practice", given Server 2016, for both on-premise and Azure IaaS (Not PaaS!)? Is SOFS (active-active) up to the task for IIS Web Farms?

How to route all traffic from VPN client through LAN?

Posted: 30 Mar 2021 06:35 PM PDT

I have a VPN server (Ubuntu) with 2 interfaces:

eth0: A.B.C.D
eth1: 192.168.8.45

I've set up a VPN which creates a tun interface; the VPN subnet is 10.8.0.0/24. From a VPN client (10.8.0.6) I can connect to my LAN (192.168.8.0/24), and ping and access servers on the LAN. I've forced all client traffic through the server by setting in server.conf:

push "redirect-gateway def1"  push "dhcp-option DNS 8.8.8.8"  push "dhcp-option DNS 8.8.4.4"  

And set iptables:

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE  

With these settings, I can connect to the internet, but using the IP address of eth0 (A.B.C.D), not through eth1 as I want.

If I try to set:

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE  

Then I cannot connect to the internet.

So what is the problem? I've already googled but found no answer for this case. Could you help me? I would really appreciate it. Thank you
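A hedged explanation and sketch: MASQUERADE only rewrites the source address after the routing decision, and the default route points out eth0, so a rule matching -o eth1 never fires for internet-bound packets. Policy routing can send the VPN subnet out via the LAN instead (192.168.8.1 below is an assumed LAN gateway address):

ip route add default via 192.168.8.1 dev eth1 table 100
ip rule add from 10.8.0.0/24 table 100
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth1 -j MASQUERADE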

how to whitelist IPs on google compute engine?

Posted: 30 Mar 2021 08:07 PM PDT

One of the ISPs is unable to access my website hosted on Google Compute Engine. How do I whitelist their IP blocks? I don't see any IPs blocked in the iptables of the individual Linux machines running behind the load balancer.

Update 1: The firewall rule which allows incoming traffic from any source (0.0.0.0/0, tcp 80) is already added to these GCE machines.
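For reference, a sketch of how allow rules are listed and added on GCE (the CIDR below is illustrative, taken from the ISP ranges visible in the traceroute further down):

gcloud compute firewall-rules list
gcloud compute firewall-rules create allow-isp-web \
    --allow=tcp:80,tcp:443 --source-ranges=103.194.232.0/24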

Update 2: Further analysis has resulted in the following observations (We took the client on a remote call):

  1. http://mywebsite.com is not loading while https://mywebsite.com is loading. This is witnessed only by this user; other users are able to access the site over both http and https.
  2. Contacted the ISP and confirmed that there are other machines on that ISP facing the same issue. This means it's not a client-specific issue (browser, firewall, antivirus).
  3. ping mywebsite.com is successful on the client machine.
  4. tracert mywebsite.com reaches Google's network without any packet loss; this is followed by a few request timeouts before it hits the IP of mywebsite.com. Sometimes it never reaches the IP of mywebsite.com, and everything after entering Google's network results in request timeouts.

Traceroute example from the client's system is as follows:

 1    <1 ms    <1 ms    <1 ms  192.168.0.1
 2     1 ms     1 ms     1 ms  192.168.10.1
 3    34 ms     9 ms    13 ms  103.194.232.3.rev.jetspotnetworks.in [103.194.232.3]
 4   102 ms    10 ms    13 ms  103.194.232.1.rev.jetspotnetworks.in [103.194.232.1]
 5    11 ms     9 ms    13 ms  115.42.32.65.rev.jetspotnetworks.in [115.42.32.65]
 6    86 ms    87 ms    87 ms  72.14.218.21    -----------> Google ISP
 7    89 ms    85 ms    85 ms  209.85.142.228  -----------> Google ISP
 8   121 ms   122 ms   121 ms  66.249.94.39
 9   145 ms   149 ms   144 ms  216.239.63.213
10   207 ms   208 ms   212 ms  216.239.62.201
11   200 ms   197 ms   197 ms  66.249.94.131
12     *        *        *     Request timed out.
13     *        *        *     Request timed out.
14     *        *        *     Request timed out.
15     *        *        *     Request timed out.
16     *        *        *     Request timed out.
17     *        *        *     Request timed out.
18     *        *        *     Request timed out.
19     *        *        *     Request timed out.
20   243 ms   191 ms   191 ms  IP of mywebsite.com  -----------> Sometimes this is never witnessed

Apache 2.4 timeout not working for idle connections?

Posted: 30 Mar 2021 09:04 PM PDT

I need to lower the Apache Timeout value; I tried setting it to 3 or 5 seconds. When I open a telnet connection to Apache and leave it idle with no data, the connection always stays open for ~30 sec plus the Apache Timeout.

I tested this configuration on Apache 2.2 and it works correctly (after 3 sec the connection is closed automatically). I also tried a fresh Apache 2.4 install on a VPS, with no CGI and as few modules enabled as possible, without success.

These are the modules I have:

authz_core.load
authz_user.load
headers.load
mpm_prefork.conf
mpm_prefork.load

However, if I open the connection and send something (like the first line of a request), it runs fine and the Timeout is then applied successfully.
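
A hedged guess, not a confirmed diagnosis: on Linux, Apache 2.4 defaults to "AcceptFilter http data" (TCP_DEFER_ACCEPT), so the kernel holds a dataless connection for roughly 30 seconds before Apache ever sees it, and Timeout cannot start counting until then. Directives like the following would restore the 2.2 behaviour; the timeout values are illustrative:

# Disable deferred accept so Apache sees the connection immediately
AcceptFilter http none
AcceptFilter https none

# If mod_reqtimeout is loaded (it is not in the module list above, but is
# enabled by default on many distros), it governs the header-read window
<IfModule reqtimeout_module>
    RequestReadTimeout header=3-6,MinRate=500 body=3,MinRate=500
</IfModule>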

Any ideas? Thanks.

Laravel in subdirectory - Nginx and fastcgi config

Posted: 30 Mar 2021 07:00 PM PDT

I'm trying to set up several Laravel applications on my nginx + HHVM server, but after much research and many attempts I could not get it to work.

Could someone help me with this? Here is my current setup:

server {
    listen      80;
    server_name 178.13.1.230;
    root        /home/callcenter/public_html/gateway;

    location / {
        index   index.html index.php;
    }

    location /crm {
        root    /home/callcenter/public_html/gateway/crm/public;
        #rewrite ^/crm/(.*)$ /$1 break;
        index   index.php index.html;
        try_files $uri $uri/ /index.php$is_args$args /index.php?$query_string;
    }

    location ~ /crm/.+\.php$ {
        root    /home/callcenter/public_html/gateway/crm/public;
        #rewrite ^/crm/(.*)$ /$1 break;
        include /etc/nginx/fastcgi.conf;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include /etc/nginx/hhvm;
        #fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    }
}

EDIT

Excuse my lack of attention; I really didn't explain the problem properly.

The server reacts as if I were trying to download a file directly when I access 178.13.1.230/crm.

What I need is that I can set up multiple laravel applications on the same server, accessing them in urls like:

http://178.13.1.230/app1

http://178.13.1.230/app2

The fastcgi_pass line is commented out; it is replaced by the hhvm.conf include.

Thank you for your help!

Below is a copy of the included files:

/etc/nginx/hhvm.conf

location ~ \.(hh|php)$ {
    fastcgi_keep_conn on;
    fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

location ~ /crm/.+\.(hh|php)$ {
    fastcgi_keep_conn on;
    fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

/etc/nginx/fastcgi.conf

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  HTTPS              $https if_not_empty;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param  REDIRECT_STATUS    200;
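
For reference, a commonly suggested shape for serving a Laravel app from a subdirectory uses alias plus a named location. This is only a sketch built from the paths and socket in the question, not a verified configuration; in particular, alias inside nested locations is a known trouble spot in nginx, so it may need adjusting:

location ^~ /crm {
    alias /home/callcenter/public_html/gateway/crm/public;
    try_files $uri $uri/ @crm;

    location ~ \.php$ {
        fastcgi_keep_conn on;
        fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
        fastcgi_index index.php;
        # $request_filename follows the alias mapping, unlike
        # $document_root$fastcgi_script_name
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include fastcgi_params;
    }
}

# Send pretty URLs to Laravel's front controller
location @crm {
    rewrite /crm/(.*)$ /crm/index.php?/$1 last;
}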

ssh authentication doesn't allow public key

Posted: 30 Mar 2021 05:03 PM PDT

I'm having a problem with SSH authentication on one of our test servers. I have configured sshd_config correctly, yet the server advertises password as the only authentication method.

[root@[client] ~]# ssh -v [server]
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to [server] [xxx.xx.xx.xxx] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/identity-cert type -1
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.6
debug1: match: OpenSSH_4.6 pat OpenSSH_4*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '[server]' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:5
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: password,
debug1: Next authentication method: password
root@[server] password:

What I'm expecting:

debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password

I expect this because of the following relevant lines in my sshd_config:

PermitRootLogin yes
PermitRootLogin without-password
RSAAuthentication yes
PubkeyAuthentication yes

I don't know whether this is a firewall issue or a server-side one.

My sshd_config file:

ListenAddress 199.xx.xx.xx
Protocol 2
SyslogFacility AUTHPRIV
PermitRootLogin yes
PermitRootLogin without-password
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes

Also, some relevant permissions:

drwx------.  2 root root     4096 Sep  1 09:12 .ssh
-rw-r--r--.  1 root root      235 Aug 12 16:00 .ssh/authorized_keys

I even tried to force public-key authentication, but it still falls back to password authentication:

ssh -2 -vvv -o PubkeyAuthentication=yes -o RSAAuthentication=yes -o PasswordAuthentication=yes -o PreferredAuthentications=publickey [server]

debug3: Wrote 64 bytes for a total of 1277
debug1: Authentications that can continue: password,
debug3: start over, passed a different list password,
debug3: preferred publickey
debug1: No more authentication methods to try.

/var/log/secure:
Sep  3 10:43:09 crewtest sshd[47353]: Connection closed by xx.xx.xx.xx
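
A hedged debugging sketch: running a second sshd in debug mode on an alternate port usually shows why publickey is not being offered. The port is illustrative, and the SELinux step is an assumption prompted by the trailing dot in the ls output above:

# On the server: run a one-off sshd in debug mode on an alternate port
/usr/sbin/sshd -d -p 2222

# From the client: connect to that instance and watch both sides' output
ssh -v -p 2222 root@[server]

# If the server runs SELinux, mislabeled ~/.ssh contexts can silently
# disable publickey auth; restore the default labels
restorecon -Rv /root/.ssh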

nginx proxy_no_cache doesn't work

Posted: 30 Mar 2021 06:02 PM PDT

I am trying to set up a caching reverse proxy with two conditions; if either is met, it should not store the file from the origin:

  • partial-content request
  • request with query string

As configured below, I got nginx to NOT store partial-content requests.
However, it is still storing requests with query string. What am I missing here?

curl -r 1000-2000 http://edge.com/data/abcdef.dat [OK. No file stored.]
wget http://edge.com/data/abcdef.dat?query=string [Not OK. abcdef.dat stored on edge.]

location /data/ {
    set $originuri /origin$uri$is_args$args;
    error_page 404 = $originuri;
}

location /origin/ {
    proxy_pass http://origin.com:1111;
    proxy_store /mnt1/edge/store;
    proxy_temp_path /mnt1/edge/tmp;
    proxy_set_header If-Range $http_if_range;
    proxy_set_header Range $http_range;
    proxy_no_cache $http_range $http_if_range $is_args;
}
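
One hedged aside worth noting: proxy_no_cache is consulted only by the proxy_cache machinery, whereas proxy_store is unconditional. A cache-based sketch of the same intent would look like the following; the cache path, zone name, and sizes are illustrative:

# In the http{} context: define a cache (path and zone are illustrative)
proxy_cache_path /mnt1/edge/cache keys_zone=edge:10m max_size=10g;

location /origin/ {
    proxy_pass http://origin.com:1111;
    proxy_cache edge;
    proxy_set_header If-Range $http_if_range;
    proxy_set_header Range $http_range;
    # Skip both partial-content requests and query-string requests,
    # on the way in (bypass) and on the way out (no_cache)
    proxy_no_cache $http_range $http_if_range $is_args;
    proxy_cache_bypass $http_range $http_if_range $is_args;
}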

Apache segfaults on Wordpress admin page

Posted: 30 Mar 2021 10:03 PM PDT

I can access the frontend of a new Wordpress setup without issue. I can also log in with a user account without issue. However, if I try to log in with an admin account I get a 500 error about 70-80% of the time.

If I do manage to get through to the WP dashboard, any page within the admin panel will cause the error only /sometimes/ - I can't find a set way to reproduce the error.

Apache's error log shows a segmentation fault for each of these 500 errors.

I started my investigation with Wordpress:

  • Disabled all WP plugins
  • Reset the theme to default
  • Removed .htaccess and hit the php pages directly

The error still occurred intermittently.

I figured my next step should be to get a core dump of the Apache thread that died to see if there are any clues, but I can't get it to dump.

I'm running Debian 6.0.4 and have followed the instructions in /usr/share/doc/apache2.2-common/README.backtrace, which state:

1) Install the packages apache2-dbg libapr1-dbg libaprutil1-dbg gdb.

2) Add "CoreDumpDirectory /var/cache/apache2" to your apache configuration.

3) Execute as root:

/etc/init.d/apache2 stop
ulimit -c unlimited
/etc/init.d/apache2 start

4) Do whatever it takes to reproduce the crash. There should now be a file /var/cache/apache2/core.

Still, the core files aren't being dumped, and Apache's error log doesn't have (core dumped) in the segmentation fault lines.
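
A few hedged checks that often explain missing dumps; the paths are Debian defaults, the PID lookup is illustrative, and the /etc/default/apache2 step is an assumption about the init script sourcing that file:

# Confirm a running apache2 process actually inherited the unlimited core limit
cat /proc/$(pgrep -o apache2)/limits | grep -i core

# The init script runs with its own environment; setting the limit in
# /etc/default/apache2 is a commonly suggested alternative to the shell ulimit
echo 'ulimit -c unlimited' >> /etc/default/apache2

# Apache children drop privileges, which suppresses dumps by default
sysctl -w fs.suid_dumpable=2
sysctl -w kernel.core_pattern=/var/cache/apache2/core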

Any ideas?
