Friday, July 8, 2022

Recent Questions - Server Fault

How to install latest docker via spinnaker pipeline onto GCE VM

Posted: 08 Jul 2022 08:13 AM PDT

I'm creating a GCE VM from a spinnaker pipeline to deploy an app via docker-compose.

When I try to get docker installed on the VM in the way shown in the image, it installs docker version 1.

Is there a way for me to get the latest version of docker installed on the VMs deployed via a Spinnaker pipeline?

[screenshot of the Spinnaker pipeline stage configuration]
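For reference, one common approach (a sketch, not Spinnaker-specific; the Debian/Ubuntu image and the way the metadata is wired into the pipeline stage are assumptions) is to attach a GCE startup script that installs the current Docker release via Docker's official convenience script rather than the distro's old package. The block below only builds and prints the script that would go into instance metadata:

```shell
# Sketch: the startup-script that would be attached as GCE instance metadata;
# it installs the current Docker release via Docker's official convenience
# script (assumption: the VM image is Debian/Ubuntu-based).
STARTUP_SCRIPT=$(cat <<'EOF'
#!/bin/bash
curl -fsSL https://get.docker.com -o /tmp/get-docker.sh
sh /tmp/get-docker.sh
systemctl enable --now docker
EOF
)
printf '%s\n' "$STARTUP_SCRIPT"
```

How the metadata key is passed in depends on your Spinnaker stage definition; the point is only that the VM fetches Docker from Docker's own channel at boot instead of the image's default repository.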

Downloading large excel file in angular using xlsx library(net::ERR_INCOMPLETE_CHUNKED_ENCODING 200)

Posted: 08 Jul 2022 08:11 AM PDT

I am downloading an Excel file using Angular + Spring Boot, and the API returns an application/json response. Transfer-Encoding: chunked is always enabled, and I see the responses return as HTTP/1.1.

I posted this on Stack Overflow but haven't gotten any solution. Please help, as I have been stuck for quite some time on what to do.

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    gzip  on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_proxied any;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        application/x-javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/json
        application/xml
        application/rss+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;
    gzip_static on;
    gzip_proxied  expired no-cache no-store private auth;
    gzip_disable  "MSIE [1-6]\.";
    gzip_vary on;

    server {
        listen 8080;
        server_name  localhost;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        include /etc/nginx/mime.types;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}

Debugging why the packets get dropped

Posted: 08 Jul 2022 08:10 AM PDT

I've started on Stack Overflow, then after some investigation asked on Super User, because it looks like a network or kernel (mis)configuration, but I probably need a seasoned network engineer to help :)

I'm mangling packets with the help of NFQUEUE; there's only a single rule in iptables:

iptables -A OUTPUT -p tcp -m tcp --dport 80 -j NFQUEUE --queue-num 1  

The script is simple (python, requires pypacker):

import re
import time

from pypacker import interceptor
from pypacker.layer3 import ip
from pypacker.layer4 import tcp

def handle_packet(ll_data, ll_proto_id, data, ctx, *args):
    ip_part = ip.IP(data)
    tcp_part = ip_part[tcp.TCP]

    if body := tcp_part.body_bytes:
        new_body = re.sub(b'/[a-z]+', b'/uuid', tcp_part.body_bytes)
        if new_body != tcp_part.body_bytes:
            tcp_part.body_bytes = new_body
            data = ip_part.bin()  # Will be returned

    return data, interceptor.NF_ACCEPT

ictor = interceptor.Interceptor()
ictor.start(handle_packet, queue_ids=[0, 1, 2])
time.sleep(180)
ictor.stop()

Testing with curl:

curl http://httpbin.org/ip  

When I don't change the length of the packet, for example, requesting /ipip, the response is returned. When I change the length, the response packet (200 OK) is dropped (I see it in tcpdump and nfqueue if I configure the PREROUTING rule).

Can somebody give an idea where to look for the solution? dropwatch unfortunately says only this:

1 drops at _einittext+1dacb95a (0xffffffffa002584c)  

I've tried on different servers (Gentoo, Ubuntu) with different kernels and settings (playing with connection tracking, disabling rp_filter, etc.), but nothing makes it work.

Site to Site communication over Azure VPN

Posted: 08 Jul 2022 08:08 AM PDT

I have set up a VPN Gateway on Azure using the following tutorial -

https://docs.microsoft.com/en-us/azure/vpn-gateway/tutorial-site-to-site-portal

I have 2 sites, one with a DrayTek router and one with a UniFi Security Gateway. Both routers are connected to the VPN.

The issue is that the 2 sites can't talk to each other. I can't work out whether it's the firewalls on the routers, some routing that needs setting up on the VPN Gateway, or something else.

Has anyone accomplished this before? Longer term I want to connect 5 sites to the Azure VPN so they can all talk to each other.

Adding Microsoft Power Virtual Agents to existing conversation

Posted: 08 Jul 2022 07:35 AM PDT

I'm attempting to make a chat bot for our IT support chat; just basics, like when a user reports their machine isn't switching on, the bot will instruct them to try pushing the cables back in before escalating to the IT team.

I've created a bot on Power Virtual Agents and I'm trying to add it to the specific IT Support chat in order to respond to messages.

However, I can only seem to talk to it via a direct conversation with the bot itself. The documentation is a bit confusing and it seems as if this isn't possible.

Does anyone have any experience utilising these bots in a Teams conversation?

Cannot access specific website from my arch desktop

Posted: 08 Jul 2022 07:23 AM PDT

There is a specific website, www.balaye.net, that I cannot reach from my Arch desktop, even though the rest of the internet works with no problem.

I have tried using two different Wi-Fi networks with no success. Moreover, I can reach the URL from other devices running Windows or Android.

Here is my config but I am not sure what I need to provide:

File: /etc/resolv.conf

# Generated by NetworkManager
nameserver 192.168.214.185

When using nslookup www.balaye.net

Server:         192.168.214.185
Address:        192.168.214.185#53

Non-authoritative answer:
www.balaye.net  canonical name = https://pietrodito.github.io.
Name:   https://pietrodito.github.io
Address: 185.199.111.153
Name:   https://pietrodito.github.io
Address: 185.199.108.153
Name:   https://pietrodito.github.io
Address: 185.199.110.153
Name:   https://pietrodito.github.io
Address: 185.199.109.153
Name:   https://pietrodito.github.io
Address: 2606:50c0:8001::153
Name:   https://pietrodito.github.io
Address: 2606:50c0:8000::153
Name:   https://pietrodito.github.io
Address: 2606:50c0:8003::153
Name:   https://pietrodito.github.io
Address: 2606:50c0:8002::153

When using dig www.balaye.net

; <<>> DiG 9.18.4 <<>> www.balaye.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29574
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.balaye.net.                        IN      A

;; ANSWER SECTION:
www.balaye.net.         159     IN      CNAME   https://pietrodito.github.io.
https://pietrodito.github.io. 159 IN    A       185.199.110.153
https://pietrodito.github.io. 159 IN    A       185.199.109.153
https://pietrodito.github.io. 159 IN    A       185.199.111.153
https://pietrodito.github.io. 159 IN    A       185.199.108.153

;; Query time: 10 msec
;; SERVER: 192.168.214.185#53(192.168.214.185) (UDP)
;; WHEN: Fri Jul 08 16:19:31 CEST 2022
;; MSG SIZE  rcvd: 138

And when trying to ping www.balaye.net:

ping: www.balaye.net: System error  

And when trying ping www.google.com:

PING www.google.com (216.58.214.164) 56(84) bytes of data.
64 bytes from par10s42-in-f4.1e100.net (216.58.214.164): icmp_seq=1 ttl=115 time=53.7 ms
64 bytes from par10s42-in-f4.1e100.net (216.58.214.164): icmp_seq=2 ttl=115 time=47.6 ms
64 bytes from par10s42-in-f4.1e100.net (216.58.214.164): icmp_seq=3 ttl=115 time=58.4 ms
64 bytes from par10s42-in-f4.1e100.net (216.58.214.164): icmp_seq=4 ttl=115 time=60.6 ms
^C
--- www.google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 47.585/55.077/60.585/4.986 ms
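One detail in the lookups above may matter (a diagnostic sketch, not a confirmed root cause): the CNAME target is recorded as https://pietrodito.github.io., and a URL scheme is not a legal DNS hostname, which would be consistent with ping's "System error". A quick syntax check:

```shell
# Sketch: check whether a name is a syntactically valid DNS hostname; the
# CNAME target shown above contains ":" and "/", which cannot appear in
# DNS labels.
is_valid_hostname() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*\.?$'
}
is_valid_hostname "pietrodito.github.io" && echo "valid"
is_valid_hostname "https://pietrodito.github.io." || echo "invalid"
```

If the record really is stored that way at the DNS provider, fixing the CNAME to point at the bare hostname would be the place to start.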

How to filter by service groups in the monit web frontend?

Posted: 08 Jul 2022 06:33 AM PDT

Monit has a feature called "service groups" (https://mmonit.com/monit/documentation/monit.html#SERVICE-GROUPS). It allows you to filter by groups using the CLI. But is it possible to use this filter on the web frontend?

How can I use a value returned from one dataset to build a query for a different dataset, in SSRS?

Posted: 08 Jul 2022 06:56 AM PDT

I'm using SQL Server Report Builder 3.0 to create a report.

I have two tables and I want to look up a value in one table and then use that to select a value from the second table. Example as follows:

Table: Staff

| staff_id | Surname |
| -------- | ------- |
| 1        | Smith   |
| 2        | Jones   |

Table: Courses

| Teacher | Course |
| ------- | ------ |
| 1       | French |
| 2       | German |

Dataset1: SELECT staff_id FROM staff WHERE surname='Smith'

Dataset2: SELECT course FROM courses WHERE teacher = {staff_id from Dataset1}

But I can't figure out how to pass the value from Dataset1 to the query in Dataset2.

I've tried getting the value into a Variable called teacher_id using a lookup against the other dataset and then the query for Dataset2 becomes

SELECT course FROM courses WHERE teacher = @teacher_id

but I get the error

The expression used for the calculated field 'teacher_id' includes an aggregate, RowNumber, RunningValue, Previous or lookup function. Aggregate, RowNumber, RunningValue, Previous and lookup functions cannot be used in calculated field expressions.

Is it possible to do what I need to?

Many thanks

Edit: Apologies - I can't seem to format the tables correctly, but they are just 2 cols by 2 rows so should be easily understood

Windows 10 Offline files synchronization monitoring

Posted: 08 Jul 2022 06:03 AM PDT

There is a class in WMI ROOT\CIMV2 named Win32_OfflineFilesHealth with the property LastSuccessfulSyncTime. But the class has no instances, even though I have Offline Files enabled and am using it.

What am I doing wrong? Is there another way to get the date of the last successful synchronization of my offline files in a machine-readable way?

Why configure non-default cluster domain

Posted: 08 Jul 2022 05:42 AM PDT

I'm considering deploying a Kubernetes cluster for semi-production use. So far I have a good grasp, but there's one thing I don't fully understand yet. By default, Kubernetes uses the cluster.local domain for its internal DNS. It is possible to change that default at installation time. I own a domain and could configure it as cluster.mydomain.com, but what would be a good reason to do so? I'm asking because I expect it to be hard to change later, so I want to configure it properly now.
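For reference, the value is indeed fixed at install time; with kubeadm (an assumption here; other installers expose an equivalent knob) it is the networking.dnsDomain field of the ClusterConfiguration:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # Replaces the default "cluster.local" for in-cluster DNS names
  dnsDomain: cluster.mydomain.com
```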

Any way to bypass Barracuda spam filter for specific addresses?

Posted: 08 Jul 2022 06:02 AM PDT

We are a medium-sized company planning to switch our spam filter service. Right now, our computing solution provider hosts a Barracuda service on its server. Our emails are routed there and filtered before being sent to Microsoft 365.

Our plan is to switch to Defender for Office 365. We could just change our MX records right away to send everything straight to Microsoft 365, but I'd rather do a smoother transition by first having only a couple of email addresses bypassing the Barracuda. This way we'll see how Defender reacts to unfiltered emails.

Is there any way to bypass spam filtering in Barracuda for only specific email addresses? Our provider says it's impossible, but somehow I feel like that should be doable. Or is there any other way to have emails go straight to Microsoft 365 for some addresses?

Edit: Or can Barracuda be told not to filter emails for specific email addresses?

Thanks in advance for your answers (and patience, this is my first question here)

RAID Controller -- SAS Expander -- SATA drives. What max throughput can be expected?

Posted: 08 Jul 2022 05:59 AM PDT

I am building a fast-recovery solution and am stuck on choosing the RAID controller.

I have a chain:

  • 24 SATA SSD drives each performing at 4Gbit/s. Total write throughput is 96Gbit/s.
  • SAS Expander Intel RES3TV360. 2x4 SAS IN. 7x4 SAS/SATA OUT. 12/6/3/1.5 Gbit/s per port.
  • RAID Controller Intel RS3P4MF088F. 1x8 SAS/SATA OUT. 12/6/3/1.5 Gbit/s per port.

In both manuals for RAID Controller and SAS Expander stated:

  • Up to 12Gbit/s per SAS port
  • Up to 6Gbit/s per SATA port
  • STP (SATA tunneling) via SAS ports supported

Drives are organized in RAID 10 array.

The question is: what top write speed can I expect, and where is the bottleneck in this chain?

I can explain my concern:

Neither the RAID controller manual nor the SAS expander manual states at which speed the controller and expander will communicate over the 8 lanes between them when only SATA drives are connected to the expander.

  • If it is 12Gbit/s, then 8 x 12Gbit/s = 96Gbit/s, which is fine. (It results in 48Gbit/s write speed on RAID 10, which is acceptable.)

  • If it is just 6Gbit/s between the RAID controller and the SAS expander, then 8 x 6Gbit/s = 48Gbit/s, which is not enough. (It results in just 24Gbit/s write speed on RAID 10, less than expected for my solution.)
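The two cases above are plain lane arithmetic; since RAID 10 mirrors every write, usable write bandwidth is half the controller-to-expander link:

```shell
# Sketch: link bandwidth over the 8 lanes, and the resulting RAID 10 write
# bandwidth (half the link, as each block is written twice).
for lane_gbit in 12 6; do
    link=$((8 * lane_gbit))
    echo "8 lanes @ ${lane_gbit} Gbit/s -> link ${link} Gbit/s, RAID 10 writes ~$((link / 2)) Gbit/s"
done
```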

Thanks in advance.

What are the consequences of RAM speed being too high on a ProLiant DL385 G7?

Posted: 08 Jul 2022 07:51 AM PDT

Rendering the 3D model of an optical lens or mirror with OpenSCAD can require a huge amount of RAM, so for this use I have maxed out the RAM on a ProLiant DL385 G7. However, the RAM I got was PC3-14900 (1866MHz) instead of the required PC3-10600 (1333MHz).

Should this cause a real problem, or might only the speed reporting be affected? The memory test at boot after the change reported no problems, but iLO claims that my RAM runs at 9MHz...

What would be a good test of the effective speed?

[screenshot of the iLO memory information page]

Difficulty running crontab startup script

Posted: 08 Jul 2022 06:46 AM PDT

I have the following crontab job to run on reboot:

PATH=/snap/bin:/usr/bin
@reboot run_me_startup.sh >> $HOME/startup_run_log.txt 2>&1

My docker install has placed the files in /snap/bin (hence my PATH statement). My run_me_startup.sh script is as follows:

echo "Startup Script"
sleep 240
echo "Running..."
sudo service apache2 stop && sudo service nginx stop
sudo chown $(whoami):$(whoami) /var/run/docker.sock
/snap/bin/docker network create dbnet
/snap/bin/docker network create nginx_network
export aws_access_key_id="secret"
export aws_secret_access_key="secret"
cd $HOME/The6ix/The6ixDjango && pwd && /snap/bin/docker-compose -f docker-compose.prod.yml down --remove-orphans
{
  {
    rm -rf $HOME/The6ix/The6ixDjango && cd $HOME/The6ix && git clone https://secret@github.com/cooneycw/The6ixDjango.git
  } ||
  {
    cd $HOME/The6ix && git clone https://secret@github.com/cooneycw/The6ixDjango.git
  }
}
ls
cp $HOME/database.env $HOME/The6ix/The6ixDjango/database.env

I inserted the sleep 240 to ensure that the docker service was started before the code executes. My output log is as follows:

Startup Script
Failed to stop nginx.service: Unit nginx.service not loaded.
chown: cannot access '/var/run/docker.sock': No such file or directory
/home/ubuntu/run_me_startup.sh: 4: docker: not found
/home/ubuntu/run_me_startup.sh: 5: docker: not found
/home/ubuntu/The6ix/The6ixDjango
/home/ubuntu/run_me_startup.sh: 8: docker-compose: not found
Cloning into 'The6ixDjango'...
The6ixDjango
/home/ubuntu/run_me_startup.sh: 19: docker-compose: not found
/home/ubuntu/run_me_startup.sh: 20: docker-compose: not found
/home/ubuntu/run_me_startup.sh: 21: docker-compose: not found
/home/ubuntu/run_me_startup.sh: 22: docker-compose: not found
/bin/sh: 1: run_me_startup.sh: not found
/bin/sh: 1: run_me_startup.sh: not found
/bin/sh: 1: run_me_startup.sh: not found
/bin/sh: 1: run_me_startup.sh: not found
/bin/sh: 1: run_me_startup: not found
/bin/sh: 1: run_me_startup: not found
/bin/sh: 1: run_me_startup.sh: not found

I don't understand why I am getting the docker: not found errors given the PATH statement in my crontab. And should I be concerned about the /bin/sh lines at the bottom of my log? Have I somehow triggered my script to run twice? Linux novice... I appreciate your generosity!
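On the PATH side, cron executes jobs with /bin/sh and a minimal environment, so a common defensive pattern (a sketch; whether snap has finished mounting /snap/bin by the time @reboot fires is a separate thing to verify) is to set PATH inside the script itself and log what is actually resolvable:

```shell
#!/bin/sh
# Hypothetical header for run_me_startup.sh: set PATH explicitly rather than
# relying only on the crontab PATH= line, then report whether docker resolves
# (at @reboot, /snap/bin may not be mounted yet, which would also produce
# "docker: not found").
export PATH=/snap/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
if command -v docker >/dev/null 2>&1; then
    echo "docker found at: $(command -v docker)"
else
    echo "docker not on PATH yet (is /snap mounted?)"
fi
echo "PATH=$PATH"
```

The repeated "/bin/sh: run_me_startup.sh: not found" lines do suggest that more than one crontab entry (possibly under another user) references the script by bare name; running crontab -l for each user would confirm.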

Allowing file types in specific folder apache2 vhost file

Posted: 08 Jul 2022 05:22 AM PDT

I have an apache2 virtual host file which has several sites in it (yes I know... a single file; work with me here). Before any of the VirtualHost stuff, I've specified some directory and file rules like so:

# Specifically allow exception for well-known (Lets-Encrypt needs this!)
<Directory ~ "/\.well-known/">
    Require all granted
</Directory>

However, I've also blocked some file types:

# Don't allow certain file extensions
<Files ~ "\.(txt|md|cfg|dist|dir|log|json|lock)$|^\.">
    Require all denied
</Files>

What I'm trying to do is block those files everywhere except the .well-known folder, so I created an override like so:

<Files ~ "/\.well-known//\.(txt|md|cfg|dist|dir|log|json|lock)$|^\.">
    Require all granted
</Files>

I've googled this until blue in the face (probably not asking the right question); any help would be appreciated, as the syntax is obviously wrong and it still blocks json files in the .well-known folder.
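For what it's worth, <Files> sections match only a file's basename, never a path, so a pattern containing "/.well-known/" can never match anything. One sketch of a fix (assuming this block appears after the global <Files> deny, so that it merges later) is to scope the re-grant inside the .well-known <Directory>:

```apache
<Directory ~ "/\.well-known/">
    # Re-allow, inside .well-known only, the extensions denied globally;
    # the path scoping comes from <Directory>, since <Files> cannot match paths.
    <Files ~ "\.(txt|md|cfg|dist|dir|log|json|lock)$|^\.">
        Require all granted
    </Files>
</Directory>
```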

Importing cert from Ubuntu certificate authority to Windows Server 2022

Posted: 08 Jul 2022 07:01 AM PDT

In our environment we have recently created an Ubuntu-based CA, and we want to utilise this in our Windows environments.

I was following a tutorial on how to implement the CA, and I don't know if I'm just not googling the right thing, but I couldn't find anything related to what I was doing; it feels like I'm going round in circles.

I've followed this tutorial https://www.digitalocean.com/community/tutorials/how-to-set-up-and-configure-a-certificate-authority-ca-on-ubuntu-20-04 and am at the point where I've created my ca.crt and I'm ready to distribute it. But when trying to link any cert in Windows, it tries to retrieve it from local storage, and I can't see anywhere in Windows to specify the CA server name to get this all working.

I've seen a ton of information on setting up a CA between machines on linux and a linux based CA, or a windows ca on linux clients, etc

I have no experience setting up a CA, as our last one was built without any documentation and is now broken.

Any help would be appreciated and sorry if this is a stupid question lmao

GKE Cluster with Dataplane v2 services unavailable when 1 of its pods goes down

Posted: 08 Jul 2022 07:26 AM PDT

I'm struggling with an issue in our GKE cluster, which is set up to use container-native load balancing for our ingresses, running on Dataplane V2. The issue appears to be focused at the service level, though, which I believe ultimately causes issues at the LB level.

Basically, for any service, whether ones we manage or ones managed by Google, if a pod is lost, the service is inaccessible. I would assume a misconfiguration on our part, but the same happens even with the kubedns service, which is hugely impactful.

For the most part, I don't see any major errors in the cilium logs, which seems like the most logical place for this issue to be coming from. It does provide a warning that it doesn't have permissions to annotate nodes, but I can't find any info on whether that has any bearing on our issues.

Download Sharepoint Online files via Powershell

Posted: 08 Jul 2022 08:21 AM PDT

I've been handed a project where our marketing dept produces images, and each workstation and laptop needs to download those images on startup or login (wallpaper folder, screen-saver images, etc.).

I'd like to use Sharepoint Online to store a publicly accessible (no login required) folder with the images, and then use Powershell to grab the contents.

Every example that I find requires a snapin or module for this.

Does anyone know of a straightforward way for vanilla PowerShell to copy the contents of a publicly available SharePoint Online folder to local disk?

Thanks!

Active Directory/LDAP replication Windows/Ubuntu

Posted: 08 Jul 2022 08:09 AM PDT

I am trying to set up replication between a Windows AD and OpenLDAP on Ubuntu.

Access to the Windows AD server seems to work OK, and the OpenLDAP on Ubuntu also seems to work; however, I am getting stuck on setting up the replication between the two. I am new to AD/LDAP, and there might be some concepts I'm missing.

I am able to list users on the remote (Windows) AD:

ldapsearch -x -h 192.168.1.200 -D 'CN=LDAP OpenVPN,CN=Users,DC=DOMAIN,DC=NET' -w 'xxx' -b "DC=DOMAIN,DC=NET" cn  

I set up the replication using the following config:

dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncrepl

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001
  provider=ldap://192.168.1.200:389/
  type=refreshAndPersist
  retry="30 5 300 3"
  interval=00:00:05:00
  searchbase="CN=Users,DC=DOMAIN,DC=NET"
  bindmethod=simple
  binddn="CN=LDAP OpenVPN,CN=Users,DC=DOMAIN,DC=NET"
  credentials="xxx"
-
add: olcUpdateRef
olcUpdateRef: ldap://192.168.1.200

And applied using:

$ sudo ldapadd -Y EXTERNAL -H ldapi:/// -f syncrepl.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=module{0},cn=config"

modifying entry "olcDatabase={1}mdb,cn=config"

However, replication seems to fail with the following error:

[16-02-2022 22:00:20] slapd debug  slap_client_connect: URI=ldap://192.168.1.200:389/ DN="cn=admin,dc=domain,dc=net" ldap_sasl_bind_s failed (49)
[16-02-2022 22:00:20] slapd debug  do_syncrepl: rid=001 rc 49 quitting
[16-02-2022 22:00:21] slapd debug  slap_client_connect: URI=ldap://192.168.1.200:389/ DN="cn=openvpnldap,dc=domain,dc=net" ldap_sasl_bind_s failed (49)
[16-02-2022 22:00:21] slapd debug  do_syncrepl: rid=001 rc 49 quitting
[16-02-2022 22:00:22] slapd debug  do_syncrep2: rid=001 LDAP_RES_SEARCH_RESULT (12) Critical extension is unavailable
[16-02-2022 22:00:22] slapd debug  do_syncrep2: rid=001 (12) Critical extension is unavailable
[16-02-2022 22:00:22] slapd debug  do_syncrepl: rid=001 rc -2 quitting
[16-02-2022 22:00:22] slapd debug  do_syncrep2: rid=001 LDAP_RES_SEARCH_RESULT (12) Critical extension is unavailable
[16-02-2022 22:00:22] slapd debug  do_syncrep2: rid=001 (12) Critical extension is unavailable
[16-02-2022 22:00:22] slapd debug  do_syncrepl: rid=001 rc -2 quitting

To give an idea of what I am trying to achieve:

  • we have an on-premise network (192.168.1.0/24) with a Windows based Active Directory running on it
  • we have a Google Cloud VPC network (10.0.0.0/8) with some resources running on it
  • we have an IPSec tunnel running between the on-premise network and GCP network. Routes are properly setup and everything works like a charm
  • we would like to access our on premise LDAP (192.168.1.200) from a VM, within Google Cloud VPC network - the point is to allow users from this AD to login to an OpenVPN server located on this VM
  • we want authentication to keep working if we lose access to our on-premise network. To achieve this, the idea was to run a "proxy/cache" OpenLDAP on the same VM

Thanks a lot!

Why are my dns options specified in /etc/network/interfaces not parsed by resolvconf?

Posted: 08 Jul 2022 06:07 AM PDT

I have a Ubuntu 14.04 server which has two IP addresses specified on a single interface. They are defined in /etc/network/interfaces like so:

auto em1
iface em1 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 192.168.1.10 192.168.1.11

iface em1 inet static
    address 192.168.1.3
    netmask 255.255.255.0

As per the debian wiki I have specified multiple IP addresses in the modern style by simply declaring multiple iface stanzas referring to the same interface.

However, when the networking on this server comes up, /etc/resolv.conf is empty but for the standard header:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

And all DNS lookups are therefore failing as it does not try to contact the nameservers.

I've clearly specified the local nameservers to use in the dns-nameservers line in /etc/network/interfaces above. Why are they not in resolv.conf?

Actual IP addresses have been changed to protect the innocent

New-ADuser force to change password at next logon

Posted: 08 Jul 2022 07:04 AM PDT

I want to use PowerShell New-ADUser to add a new user such that the new user must change their password at first logon. I found the attribute -ChangePasswordAtLogon, but when I use it, the new user still does not have the change-password-at-first-logon option enabled.

New-ADUser -Name "Nguyen Van Nam" -GivenName "Nguyen Van" -Surname Nam -SamAccountName namnv -UserPrincipalName namnv@queencenter.local -ChangePasswordAtLogon 1 -AccountPassword (ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force) -PassThru | Enable-ADAccount  

in firewalld port 80 is closed but nmap shows the port is open, and I can connect to it

Posted: 08 Jul 2022 05:07 AM PDT

My Linux environment is Fedora 27, httpd is running, and firewall-cmd --list-all shows:

FedoraWorkstation (active)
  target: default
  icmp-block-inversion: no
  interfaces: wlp3s0
  sources:
  services: dhcpv6-client ssh samba-client mdns
  ports: 1025-65535/udp 1025-65535/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Although the http service and port 80 are not allowed, nmap shows:

Starting Nmap 7.60 ( https://nmap.org ) at 2017-11-25 18:55 PST
Nmap scan report for 10.0.0.15
Host is up (0.000052s latency).

PORT   STATE SERVICE
80/tcp open  http

Nmap done: 1 IP address (1 host up) scanned in 0.33 seconds

and I can actually connect to the server using a browser.

"systemctl status httpd" shows no errors, but "systemctl status firewalld" shows the following errors:

Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete FORWARD --out-interface virbr0 --jump REJECT' failed:
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete FORWARD --in-interface virbr0 --jump REJECT' failed:
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete INPUT --in-interface virbr0 --protocol udp --destination-port 53 --jump ACCEPT'
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete INPUT --in-interface virbr0 --protocol tcp --destination-port 53 --jump ACCEPT'
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete OUTPUT --out-interface virbr0 --protocol udp --destination-port 68 --jump ACCEPT
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete INPUT --in-interface virbr0 --protocol udp --destination-port 67 --jump ACCEPT'
Nov 25 18:34:44 localhost.localdomain firewalld[3310]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w --table filter --delete INPUT --in-interface virbr0 --protocol tcp --destination-port 67 --jump ACCEPT'
Nov 25 18:43:17 localhost.localdomain systemd[1]: Reloading firewalld - dynamic firewall daemon.
Nov 25 18:43:17 localhost.localdomain systemd[1]: Reloaded firewalld - dynamic firewall daemon.
Nov 25 18:43:17 localhost.localdomain firewalld[3310]: WARNING: FedoraServer: INVALID_SERVICE: cockpit

If I create the same situation in my virtual machine running CentOS 7, firewalld works as I want: while running httpd in the VM, if I add the http service to the firewall rules then I can connect, otherwise I cannot. But in Fedora, I don't know what is wrong.

What I was trying to do was port forwarding from host port 80/tcp to my VM's port 80/tcp. I realized that port forwarding was not working, and neither were add-service or add-port in firewall-cmd. How can I fix the problem?

Although iptables is disabled, I post the output of iptables -L here. 192.168.122.0/24 is the network for my VM.

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
INPUT_direct  all  --  anywhere             anywhere
INPUT_ZONES_SOURCE  all  --  anywhere             anywhere
INPUT_ZONES  all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
FORWARD_direct  all  --  anywhere             anywhere
FORWARD_IN_ZONES_SOURCE  all  --  anywhere             anywhere
FORWARD_IN_ZONES  all  --  anywhere             anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere             anywhere
FORWARD_OUT_ZONES  all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootpc
OUTPUT_direct  all  --  anywhere             anywhere

Chain FORWARD_IN_ZONES (1 references)
target     prot opt source               destination
FWDI_FedoraWorkstation  all  --  anywhere             anywhere            [goto]
FWDI_FedoraWorkstation  all  --  anywhere             anywhere            [goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain FORWARD_OUT_ZONES (1 references)
target     prot opt source               destination
FWDO_FedoraWorkstation  all  --  anywhere             anywhere            [goto]
FWDO_FedoraWorkstation  all  --  anywhere             anywhere            [goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain FORWARD_direct (1 references)
target     prot opt source               destination

Chain FWDI_FedoraWorkstation (2 references)
target     prot opt source               destination
FWDI_FedoraWorkstation_log  all  --  anywhere             anywhere
FWDI_FedoraWorkstation_deny  all  --  anywhere             anywhere
FWDI_FedoraWorkstation_allow  all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere

Chain FWDI_FedoraWorkstation_allow (1 references)
target     prot opt source               destination

Chain FWDI_FedoraWorkstation_deny (1 references)
target     prot opt source               destination

Chain FWDI_FedoraWorkstation_log (1 references)
target     prot opt source               destination

Chain FWDO_FedoraWorkstation (2 references)
target     prot opt source               destination
FWDO_FedoraWorkstation_log  all  --  anywhere             anywhere
FWDO_FedoraWorkstation_deny  all  --  anywhere             anywhere
FWDO_FedoraWorkstation_allow  all  --  anywhere             anywhere

Chain FWDO_FedoraWorkstation_allow (1 references)
target     prot opt source               destination

Chain FWDO_FedoraWorkstation_deny (1 references)
target     prot opt source               destination

Chain FWDO_FedoraWorkstation_log (1 references)
target     prot opt source               destination

Chain INPUT_ZONES (1 references)
target     prot opt source               destination
IN_FedoraWorkstation  all  --  anywhere             anywhere            [goto]
IN_FedoraWorkstation  all  --  anywhere             anywhere            [goto]

Chain INPUT_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain INPUT_direct (1 references)
target     prot opt source               destination

Chain IN_FedoraWorkstation (2 references)
target     prot opt source               destination
IN_FedoraWorkstation_log  all  --  anywhere             anywhere
IN_FedoraWorkstation_deny  all  --  anywhere             anywhere
IN_FedoraWorkstation_allow  all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere

Chain IN_FedoraWorkstation_allow (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh ctstate NEW
ACCEPT     udp  --  anywhere             anywhere             udp dpt:netbios-ns ctstate NEW
ACCEPT     udp  --  anywhere             anywhere             udp dpt:netbios-dgm ctstate NEW
ACCEPT     udp  --  anywhere
       224.0.0.251          udp dpt:mdns ctstate NEW  ACCEPT     udp  --  anywhere             anywhere             udp dpts:blackjack:65535 ctstate NEW  ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:blackjack:65535 ctstate NEW    Chain IN_FedoraWorkstation_deny (1 references)  target     prot opt source               destination             Chain IN_FedoraWorkstation_log (1 references)  target     prot opt source               destination             Chain OUTPUT_direct (1 references)  target     prot opt source               destination    

output of "lsof -i -P -n|grep LISTEN" is

dnsmasq    1037 nobody    6u  IPv4   27561      0t0  TCP 192.168.122.1:53 (LISTEN)
cupsd      1788   root    9u  IPv6   37232      0t0  TCP [::1]:631 (LISTEN)
cupsd      1788   root   10u  IPv4   37233      0t0  TCP 127.0.0.1:631 (LISTEN)
httpd      2355   root    4u  IPv6   43072      0t0  TCP *:80 (LISTEN)
httpd      2358 apache    4u  IPv6   43072      0t0  TCP *:80 (LISTEN)
httpd      2359 apache    4u  IPv6   43072      0t0  TCP *:80 (LISTEN)
httpd      2360 apache    4u  IPv6   43072      0t0  TCP *:80 (LISTEN)
sshd       3070   root    5u  IPv4   50178      0t0  TCP *:22 (LISTEN)
sshd       3070   root    7u  IPv6   50180      0t0  TCP *:22 (LISTEN)
jupyter-n  3512   rhce    4u  IPv6   64019      0t0  TCP [::1]:8888 (LISTEN)
jupyter-n  3512   rhce    5u  IPv4   64020      0t0  TCP 127.0.0.1:8888 (LISTEN)
python3    3545   rhce   14u  IPv4   66283      0t0  TCP 127.0.0.1:40521 (LISTEN)
python3    3545   rhce   17u  IPv4   66287      0t0  TCP 127.0.0.1:49589 (LISTEN)
python3    3545   rhce   20u  IPv4   66291      0t0  TCP 127.0.0.1:48583 (LISTEN)
python3    3545   rhce   23u  IPv4   66295      0t0  TCP 127.0.0.1:39659 (LISTEN)
python3    3545   rhce   28u  IPv4   66300      0t0  TCP 127.0.0.1:35933 (LISTEN)
python3    3545   rhce   41u  IPv4   68637      0t0  TCP 127.0.0.1:44955 (LISTEN)

and output of ss -tlpn is

State   Recv-Q Send-Q  Local Address:Port     Peer Address:Port
LISTEN  0      100     127.0.0.1:49589        *:*     users:(("python3",pid=3545,fd=17))
LISTEN  0      32      192.168.122.1:53       *:*     users:(("dnsmasq",pid=1037,fd=6))
LISTEN  0      128     *:22                   *:*     users:(("sshd",pid=3070,fd=5))
LISTEN  0      5       127.0.0.1:631          *:*     users:(("cupsd",pid=1788,fd=10))
LISTEN  0      128     127.0.0.1:8888         *:*     users:(("jupyter-noteboo",pid=3512,fd=5))
LISTEN  0      100     127.0.0.1:44955        *:*     users:(("python3",pid=3545,fd=41))
LISTEN  0      100     127.0.0.1:35933        *:*     users:(("python3",pid=3545,fd=28))
LISTEN  0      100     127.0.0.1:48583        *:*     users:(("python3",pid=3545,fd=20))
LISTEN  0      100     127.0.0.1:40521        *:*     users:(("python3",pid=3545,fd=14))
LISTEN  0      100     127.0.0.1:39659        *:*     users:(("python3",pid=3545,fd=23))
LISTEN  0      128     *:80                   *:*     users:(("httpd",pid=2360,fd=4),("httpd",pid=2359,fd=4),("httpd",pid=2358,fd=4),("httpd",pid=2355,fd=4))
LISTEN  0      128     *:22                   *:*     users:(("sshd",pid=3070,fd=7))
LISTEN  0      5       [::1]:631              *:*     users:(("cupsd",pid=1788,fd=9))
LISTEN  0      128     [::1]:8888             *:*     users:(("jupyter-noteboo",pid=3512,fd=4))
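Given that the chains above are firewalld-generated, one generic way to find the rule that eats the packets is to log denied traffic and watch the kernel log while reproducing the problem (a sketch; it assumes a firewalld recent enough to support `--set-log-denied`):

```shell
# Log every packet firewalld rejects or drops (adds LOG rules before the final targets)
sudo firewall-cmd --set-log-denied=all
# Watch kernel log messages while reproducing the dropped traffic
sudo journalctl -k -f | grep -iE 'reject|drop'
```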

Apache <Location "/baz"> directive not matching

Posted: 08 Jul 2022 06:07 AM PDT

I cannot get <Location "/specific-url/"> to work on Apache 2.2 (v2.2.22).

I want to set an HTTP header for the whole site, except for one specific URL path: /baz (and /baz/, /baz/quuz, etc.).

So I've set the following Apache configuration:

<VirtualHost _default_:443>
    <Location "/">
        Header set X-Foo "bar"
    </Location>
    <Location "/baz">
        Header unset X-Foo
    </Location>
</VirtualHost>

The X-Foo header is indeed returned for non-/baz URLs:

curl -D - https://example.com/qux -o /dev/null -s | grep "X-Foo"
# Outputs:
# X-Foo: bar

But, and this is where it fails, it is also returned for the /baz URL:

curl -D - https://example.com/baz -o /dev/null -s | grep "X-Foo"
# Outputs:
# X-Foo: bar

As if <Location "/"> works but <Location "/baz"> does not match.

I also tried:

<Location "/">
    Header set X-Foo "bar"
</Location>
<Location "/">
    Header unset X-Foo
</Location>

And the X-Foo header is never returned, so Header unset does work.
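For what it's worth, a workaround worth trying (untested against this exact config, and assuming mod_setenvif is loaded) is to make the header conditional on the request URI instead of unsetting it in a second <Location> block, which sidesteps the section merge order entirely:

```apache
# Hypothetical alternative: set the header only when the URI is not under /baz.
# SetEnvIf comes from mod_setenvif; env=!no_foo is standard mod_headers syntax.
SetEnvIf Request_URI "^/baz" no_foo
Header set X-Foo "bar" env=!no_foo
```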

err: CONNECTION_RESET in Chrome on form post, but only from outside network

Posted: 08 Jul 2022 08:00 AM PDT

I started getting the following error when sending multipart form-posted data via AJAX to a service on my website:

net::ERR_CONNECTION_RESET

This only happens when a certain combination of characters is used in the form-posted data, and the issue does not happen when I'm accessing the site directly from the server itself.

I'm basically having trouble tracking down the culprit, such as a request filter, a firewall, etc. I just don't know where to look, and I can't find anything in the logs that relates to it. It seems a combination of parentheses is causing the issue, but I have no idea why.

I suppose the question is, does anyone have any ideas as to where I can start tracking down this issue?

I know this is fairly vague, but I just need a push in the right direction.

Thanks a ton.

IIS 8.5, Windows Server 2012
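A hedged starting point: on IIS, resets tied to particular characters in the request body often come from request filtering, and kernel-level rejections land in the HTTP.sys error log rather than the site logs. Assuming a stock IIS 8.5 install (default paths shown), both can be inspected from a command prompt:

```bat
REM Dump the request-filtering configuration (deny rules, URL/query limits, etc.)
%windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/security/requestFiltering
REM Check the HTTP.sys error log for entries matching the failing requests
type %windir%\System32\LogFiles\HTTPERR\httperr1.log
```

Enabling Failed Request Tracing on the site would also name the module that aborted the request.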

Install Apache Derby on CentOS 6.4

Posted: 08 Jul 2022 05:07 AM PDT

I'm not a Java developer; my knowledge of this topic is very limited. I want to deploy a .war web service on my Tomcat server, which fails, most likely because of its Derby dependency (just a guess from catalina.log). So I'm now struggling with the Derby installation. First of all, I've been told it's included in the newest JDK, but there was no such package available for download. This version is all I could get from yum:

java version "1.7.0_65"
OpenJDK Runtime Environment (rhel-2.5.1.2.el6_5-x86_64 u65-b17)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

Stupid question, but is this the JDK or just the JRE? I downloaded and extracted Derby manually, and I want Tomcat to know that Derby exists. How do I do this? Hope I haven't completely misunderstood things.
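As an aside on the JDK-vs-JRE question, a quick check (assuming the stock CentOS OpenJDK packaging) is whether the compiler is present; the base java-1.7.0-openjdk package is the JRE, while the -devel subpackage carries the JDK tools:

```shell
javac -version          # found: a JDK is installed; "command not found": JRE only
rpm -qa | grep openjdk  # java-1.7.0-openjdk-devel = JDK tools, java-1.7.0-openjdk = JRE
```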

java org.apache.derby.tools.sysinfo  

returns Error: Could not find or load main class org.apache.derby.tools.sysinfo
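On the sysinfo error itself: that message usually just means the Derby jars are not on the classpath. A sketch, with /opt/derby standing in for wherever the download was extracted (hypothetical path):

```shell
# Tell the JVM where derby.jar and derbytools.jar live (path is an example)
java -cp /opt/derby/lib/derby.jar:/opt/derby/lib/derbytools.jar org.apache.derby.tools.sysinfo
# To make Derby visible to webapps, one common approach is to copy the jars
# into Tomcat's shared lib directory (location varies by install):
# cp /opt/derby/lib/derby*.jar $CATALINA_HOME/lib/
```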

Thanks for any advice.

Linux Bridges - Tags on interface vs. on bridge

Posted: 08 Jul 2022 07:04 AM PDT

What is the difference between tagging on the bridge vs a physical interface then adding that tagged interface onto the bridge?

Assume I want two VLANs segmenting network traffic to guests on KVM. I don't want the guests seeing the tags; I just want the VLANs to segregate traffic through the interfaces. Meaning the guests should not see the tags (unless packets are double-tagged).

Each scenario would look as follows:

Tagging on interface:
em1 -> em1.3 -> br0 -> vnet0 -> em1
em1 -> em1.4 -> br1 -> vnet1 -> em2

Tagging on bridge:
em1 -> br0 -> br0.3 -> vnet0 -> em1
em1 -> br0 -> br0.4 -> vnet1 -> em2

Is the net effect the same? Or is there some functional difference I'm missing here?

EDIT: I've been reading (http://blog.davidvassallo.me/2012/05/05/kvm-brctl-in-linux-bringing-vlans-to-the-guests/), and it seems like tagging on physical interfaces (em1.3) causes Linux to strip the tag before handing the frame to the bridge, whereas tagging on the bridge just passes the tagged traffic through. True? If not, where is the tag stripped/added?
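To make the two scenarios concrete, here is roughly how each would be built (a sketch for one VLAN; interface names follow the question, and the commands assume iproute2 and bridge-utils are available):

```shell
# Scenario 1: tag on the physical interface; the bridge and its guests
# see untagged frames (the VLAN device strips/adds the tag)
ip link add link em1 name em1.3 type vlan id 3
brctl addbr br0
brctl addif br0 em1.3

# Scenario 2: bridge the physical interface directly; tagged frames
# traverse the bridge, and br0.3 is the host's access into VLAN 3
brctl addbr br0
brctl addif br0 em1
ip link add link br0 name br0.3 type vlan id 3
```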

How to downgrade OpenSSL on Amazon EC2

Posted: 08 Jul 2022 08:00 AM PDT

I'm using a version of the Amazon Linux AMI (v 2013.03) that comes with OpenSSL 1.0.1 installed as described here: http://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/.

I have an application that may not be compatible with that version of OpenSSL, and I'd like to downgrade to version 0.9.8. I can install that version with the following:

sudo yum install openssl098e  

But I am unable to uninstall the 1.0.1 version. When I try:

sudo yum erase openssl  

I get a long list of what seems to be dependency processing and a result of:

Error: Trying to remove "yum", which is protected

Is there a way for me to remove the newer version of OpenSSL?
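One point that may make the removal unnecessary: on Amazon Linux (as on RHEL/CentOS), openssl098e is a compatibility package that ships only the 0.9.8e shared libraries and installs alongside the main openssl package, so legacy binaries linked against the 0.9.8 libraries can run without touching 1.0.1 (which yum itself depends on, hence the "protected" error). A quick check:

```shell
sudo yum install openssl098e        # installs the 0.9.8e compat libraries
rpm -ql openssl098e | grep '\.so'   # should list only 0.9.8-era shared objects
```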

Which permissions/rights does a user need to have WMI access on remote machines?

Posted: 08 Jul 2022 05:06 AM PDT

I'm writing a monitoring service that uses WMI to get information from remote machines. Having local admin rights on all these machines is not possible for political reasons.

Is this possible? What permissions/rights does my user require for this?
