How to properly migrate a user's email profile from a local e-mail server to M365 Posted: 28 May 2022 02:28 PM PDT My task was to migrate our HR manager's email account from our local e-mail server and keep all her folders, subfolders, mails, contacts, etc. as they were. I managed to redirect the mail flow to her new email address, but when I tried to import her old data (a PST file exported from the OST file of her previous Outlook 2019 profile) I got an error along the lines of "not enough system resources are available to finish the process". I saw that her directory structure had already been created, but the new OST file was significantly smaller than the original. I gave it another shot, and that was when things went south. I ticked the "skip existing data" option, then proceeded with the import again. It completed with no errors whatsoever, but the new file is now much bigger than the original, because every email with an attachment was duplicated. To make things worse, syncing with M365 was already turned on. Is there any way to save just the messages she has received since the redirection, completely delete the new OST file, stop syncing, import her old profile again, manually add those few new mails, and then turn syncing back on in a way that the Outlook profile on her notebook overwrites her data in the cloud and not vice versa? |
Questions about NGINX config after upgrade, TLSv1.2 TLSv1.3, etc., proxy config, getting SSL_do_handshake() failed Posted: 28 May 2022 11:51 AM PDT I recently upgraded a Docker container running NGINX to use the NGINX repo at nginx.org instead of the Debian distro version, and for security purposes we are making some adjustments to the server config regarding SSL/TLS, etc. When I scan one of the server domains with a Qualys scan it actually gets an A, but it fails for some of the older browsers. Looking at the log file, I am seeing errors mostly like this:
2022/05/28 02:41:14 [info] 24#24: *12871 SSL_do_handshake() failed (SSL: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
2022/05/28 02:41:28 [info] 24#24: *12874 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
2022/05/28 02:41:38 [info] 24#24: *12877 SSL_do_handshake() failed (SSL: error:142090C1:SSL routines:tls_early_post_process_client_hello:no shared cipher) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
2022/05/28 02:41:39 [crit] 24#24: *12878 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
2022/05/28 02:41:43 [info] 24#24: *12869 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
2022/05/28 02:41:43 [info] 24#24: *12872 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking, client: 192.168.1.1, server: 0.0.0.0:443
In the http block, not in the server blocks, I have something like this, and I tried a few different settings for ciphers, etc.
I am not an expert, but I presume those errors are somewhat related to that configuration. I am not actually seeing any errors on my dev server, just in production at this point, but the server still seems to be working, apart from a 502 error a couple of days ago.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256-CBC:ECDH+AES128-CBC:DH+3DES:!ADH:!AECDH:!MD5:!kDHE; # !kDHE disables DHE key exchange
# ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
There are a few other errors that might just be warnings, e.g.
2022/05/28 13:43:36 [warn] 24#24: *13173 an upstream response is buffered to a temporary file /var/cache/nginx/fastcgi_temp/4/00/0000000004 while reading upstream
Just looking for some pointers on how to resolve that issue. The NGINX version is 1.22.0, built directly from nginx.org. |
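For reference, the active cipher string above mixes OpenSSL group aliases with exclusions; a sketch of a more conventional layout (the cipher list and session settings here are illustrative assumptions, not a recommendation for this specific server) would look like:

```nginx
# Sketch only -- values are illustrative assumptions.
ssl_protocols TLSv1.2 TLSv1.3;
# With no legacy clients, modern guidance often turns this off so
# TLSv1.3 clients can choose their preferred suite:
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
```

With a restrictive list like either one, [info]-level "no shared cipher" and "unsupported protocol" handshake failures are typically just old clients or scanners offering protocols and ciphers the server no longer accepts.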
Intermittent 503 timeouts on website due to PHP-FPM "Attempt to connect to Unix domain socket failed" Posted: 28 May 2022 10:09 AM PDT I have been experiencing intermittent site outages showing a 503 error. The site normally runs just fine. I've pinpointed it to the following error that gets spammed in my Apache logs:
FCGI: attempt to connect to Unix domain socket /run/php/php7.3-fpm.sock (*) failed
When I run service php7.3-fpm restart , the site comes back up and everything runs fine. When I run ps aux | grep php-fpm I get something like this:
root 20772 0.0 0.0 1152852 33600 ? Ss 12:42 0:00 php-fpm: master process (/etc/php/7.3/fpm/php-fpm.conf)
www-data 20773 10.6 0.1 1285072 250308 ? S 12:42 2:34 php-fpm: pool www
www-data 20774 9.8 0.1 1271452 244952 ? S 12:42 2:22 php-fpm: pool www
www-data 20775 9.8 0.1 1260964 219920 ? S 12:42 2:22 php-fpm: pool www
// ... etc
Also, here is my permissions listing in /run/php :
-rw-r--r-- 1 root root 5 May 28 12:42 php7.3-fpm.pid
srw-rw---- 1 www-data www-data 0 May 28 12:42 php7.3-fpm.sock
I noticed that the master process is running as root. Is it supposed to be running as root? Could this be the issue? |
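For reference, the socket in that error comes from the pool config, and whether mod_proxy_fcgi can get a connection under load is usually governed by the process-manager settings there. A sketch (the file path is the Debian default; the values are illustrative assumptions, not a recommendation):

```ini
; /etc/php/7.3/fpm/pool.d/www.conf -- illustrative sketch
listen = /run/php/php7.3-fpm.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 20        ; cap on workers; if all are busy, connects fail
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500       ; recycle workers periodically to contain leaks
```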
SMB connection speed over VPN Posted: 28 May 2022 09:18 AM PDT I'm seeking advice on how to improve connection speed while using SMB shares over VPN. I have two Win10 PCs with the following setup:
PC1 - 500 Mb/s DL and 350 Mb/s UL per SpeedTest
PC2 - 100 Mb/s DL and 30 Mb/s UL per SpeedTest
Both PCs are quite powerful and definitely not CPU-constrained. PC1 hosts a SoftEther server with the VPN bridged to its local network. PC2 connects to PC1 via the SoftEther client. Over the VPN connection, PC2 gets the following speed checks: 40 Mb/s DL and 1 Mb/s UL per SpeedTest (the location is determined to be the same as that of PC1). But sharing any files over Windows shares can barely reach 200 Kb/s. My main use case was to combine both PCs into a VPN network for resource access. PS: I've tried hosting an FTP server on PC1, but for some reason FileZilla was cutting every connection down to ~100 Kb/s, with a total limit of around 300-400 Kb/s. Does anyone know what can be done to improve connection speeds? |
Channeling IPv4 packets to IPv6 packets between interfaces Posted: 28 May 2022 09:29 AM PDT How do I route packets on my machine between interfaces? Eth0 receives IPv4 packets, and I would like to channel them to Eth1, which will send IPv6 packets out. |
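Ordinary routing cannot rewrite IPv4 into IPv6; that needs a protocol translator (NAT64/SIIT, e.g. TAYGA) or an encapsulating tunnel. As a hedged sketch only (the prefix and pool addresses below are assumptions), a minimal TAYGA NAT64 config looks roughly like:

```
# /etc/tayga.conf -- minimal NAT64 sketch; addresses are illustrative
tun-device nat64
ipv4-addr 192.168.255.1          # TAYGA's own IPv4 address
prefix 64:ff9b::/96              # well-known NAT64 prefix
dynamic-pool 192.168.255.0/24    # IPv4 pool mapped to IPv6 hosts
data-dir /var/spool/tayga
```

IP forwarding also has to be enabled (net.ipv4.ip_forward=1 and net.ipv6.conf.all.forwarding=1), with routes for the prefix and pool pointed at the nat64 tun device.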
NTP traffic causes frequent ARP requests Posted: 28 May 2022 01:14 PM PDT I am running Ubuntu 22.04 LTS with Chrony as the NTP server. I discovered that even with frequent NTP traffic between an NTP client and the NTP server, ARP requests are still being sent back and forth very frequently. By default, the ARP cache expires after 60 seconds. Is this a bug?
09:32:28.116858 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:32:28.117032 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:32:30.117770 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:32:30.117936 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:32:33.116704 ARP, Request who-has 10.68.1.1 tell 10.68.1.2, length 46
09:32:33.116750 ARP, Reply 10.68.1.1 is-at 20:7c:14:a0:b9:a1, length 28
09:32:33.190181 ARP, Request who-has 10.68.1.2 tell 10.68.1.1, length 28
09:32:33.190327 ARP, Reply 10.68.1.2 is-at 00:90:e8:9d:aa:dc, length 46
09:32:46.117215 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:32:46.117470 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:32:48.117032 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:32:48.117277 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:04.116931 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:04.117104 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:06.116888 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:06.117144 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:09.286195 ARP, Request who-has 10.68.1.2 tell 10.68.1.1, length 28
09:33:09.286332 ARP, Reply 10.68.1.2 is-at 00:90:e8:9d:aa:dc, length 46
09:33:22.116699 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:22.116833 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:24.116869 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:24.117034 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:27.116688 ARP, Request who-has 10.68.1.1 tell 10.68.1.2, length 46
09:33:27.116751 ARP, Reply 10.68.1.1 is-at 20:7c:14:a0:b9:a1, length 28
09:33:40.116842 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:40.117011 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:42.116923 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:42.117089 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:33:45.126169 ARP, Request who-has 10.68.1.2 tell 10.68.1.1, length 28
09:33:45.126332 ARP, Reply 10.68.1.2 is-at 00:90:e8:9d:aa:dc, length 46
09:33:58.116928 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:33:58.117095 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:00.116873 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:00.117039 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:16.116895 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:16.117154 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:18.116863 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:18.117029 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:21.116733 ARP, Request who-has 10.68.1.1 tell 10.68.1.2, length 46
09:34:21.116768 ARP, Reply 10.68.1.1 is-at 20:7c:14:a0:b9:a1, length 28
09:34:21.222128 ARP, Request who-has 10.68.1.2 tell 10.68.1.1, length 28
09:34:21.222294 ARP, Reply 10.68.1.2 is-at 00:90:e8:9d:aa:dc, length 46
09:34:34.116899 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:34.117069 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:36.127025 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:36.127269 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:52.116889 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:52.117145 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:54.116943 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:34:54.117187 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:34:57.318148 ARP, Request who-has 10.68.1.2 tell 10.68.1.1, length 28
09:34:57.318299 ARP, Reply 10.68.1.2 is-at 00:90:e8:9d:aa:dc, length 46
09:35:10.116983 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:35:10.117159 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:35:12.116865 IP 10.68.1.2.123 > 10.68.1.1.123: NTPv4, Client, length 48
09:35:12.117031 IP 10.68.1.1.123 > 10.68.1.2.123: NTPv4, Server, length 48
09:35:15.116750 ARP, Request who-has 10.68.1.1 tell 10.68.1.2, length 46
09:35:15.116810 ARP, Reply 10.68.1.1 is-at 20:7c:14:a0:b9:a1, length 28 |
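For context, Linux revalidates neighbour-cache entries on the order of base_reachable_time (30 s by default), so periodic ARP refreshes alongside steady traffic are expected. If the refresh interval matters, it can be raised via sysctl; a sketch, with values that are illustrative assumptions rather than a recommendation:

```
# /etc/sysctl.d/90-neigh.conf -- illustrative sketch
net.ipv4.neigh.default.base_reachable_time_ms = 1800000  # ~30 min before revalidation
net.ipv4.neigh.default.gc_stale_time = 1800              # seconds before an entry is considered stale
```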
Load vars based on env in Ansible Posted: 28 May 2022 09:58 AM PDT Team, I have two vars that map to two environments. I want to use them in a playbook, but only one value should be applied, based on the env the playbook runs on. e.g. var = test1 should be loaded on env1, and var = test2 should be loaded on env2. Any hint how I can achieve this in Ansible? I want to write my task in such a way that the variable var carries the value test1 when it is run on env1 and vice versa. Is there logic I can use at the task level? My task is below; when I am running my playbook on clusterA it should use var=test1, and when running on clusterB it should use var=test2.
- name: Add persistent ddn volume
  mount:
    path: "{{ lustre_client_path }}"
    src: "{{ var }}"
    fstype: lustre
    state: mounted |
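One common pattern for this (a sketch: the group names clusterA/clusterB come from the question, while the file layout and the set_fact variant are assumptions) is to let inventory group membership pick the value via group_vars, or to branch inline with set_fact:

```yaml
# group_vars/clusterA.yml
var: test1

# group_vars/clusterB.yml
var: test2

# Or inline in the play, before the mount task:
- name: Pick var based on cluster membership (illustrative)
  set_fact:
    var: "{{ 'test1' if 'clusterA' in group_names else 'test2' }}"
```

With group_vars, no task-level logic is needed at all; Ansible resolves var per host from the group it belongs to.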
My IIS URL Rewrite rule is not working for my IIS website Posted: 28 May 2022 10:19 AM PDT I have an IIS server on Windows Server 2016 where I host one website. The site uses bindings so that both of these domains, astro.resources.teams.org and astronomyteams.org , go to the same website on both http and https. But that means the user will see either astro.resources.teams.org or astronomyteams.org in the browser, depending on their bookmark or what domain they type into the browser. So now I want both domains to show as https://astronomyteams.org/ . I installed the URL Rewrite module in IIS and added this rule:
<rule name="CanonicalHostNameRule" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="^(.*)$" />
  <conditions>
    <add input="{HTTP_HOST}" pattern="^astro.resources.teams.org$" />
  </conditions>
  <action type="Redirect" url="https://astronomyteams.org/{R:1}" />
</rule>
But now a certain large percentage of users get a "not found" message when using their old https://astro.resources.teams.org/ bookmarks. If they type in astronomyteams.org it works, but I am trying to make it as painless as possible for the end user, and I am hoping IIS can take care of this for me. Is there anything else I need to do to make this work? |
HAProxy goes to the same website even though they have different sub-domains Posted: 28 May 2022 10:05 AM PDT I have an issue with HAProxy where it goes to the same website even though the requests have different sub-domains. For example, I go to foo.domain.com, then on another tab I go to bar.domain.com, and on another tab baz.domain.com; all three load the foo.domain.com website. When I hard-refresh the other sites they go to the proper website, and then it happens again, making the newest website the face of all domains unless I keep refreshing. I have the following configuration:
defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 30000
    timeout connect 10s
    timeout client 60s
    timeout server 60s
frontend http_in
    mode http
    option httplog
    bind *:80
    option forwardfor
    acl host_foo hdr(host) -i foo.domain.com
    acl host_bar hdr(host) -i bar.domain.com
    acl host_baz hdr(host) -i baz.domain.com
    use_backend http_foo if host_foo
    use_backend http_bar if host_bar
    use_backend http_baz if host_baz
backend http_foo
    mode http
    option httplog
    option forwardfor
    server foo foo:80
backend http_bar
    mode http
    option httplog
    option forwardfor
    server bar bar:80
backend http_baz
    mode http
    option httplog
    option forwardfor
    server baz baz:80
frontend https_in
    mode tcp
    option tcplog
    bind *:443
    acl tls req.ssl_hello_type 1
    tcp-request inspect-delay 5s
    tcp-request content accept if tls
    acl host_foo req.ssl_sni -i foo.domain.com
    acl host_bar req.ssl_sni -i bar.domain.com
    acl host_baz req.ssl_sni -i baz.domain.com
    use_backend https_foo if host_foo
    use_backend https_bar if host_bar
    use_backend https_baz if host_baz
backend https_foo
    mode tcp
    option tcplog
    option ssl-hello-chk
    server foo foo:443
backend https_bar
    mode tcp
    option tcplog
    option ssl-hello-chk
    server bar bar:443
backend https_baz
    mode tcp
    option tcplog
    option ssl-hello-chk
    server baz baz:443
I'm using HAProxy version 2.4.12. Is there anything I can do to prevent this from happening? |
Filesystem in KVM-QEMU VM switching to read-only mode although smartctl is not reporting any errors Posted: 28 May 2022 11:43 AM PDT I have a Debian 10 host running a Debian 10 virtual machine with QEMU/KVM. In the past few days I got the "Error: Read-only file system" error twice, after the virtual machine had been running some intensive tasks for multiple hours. I rebooted the virtual machine each time; it told me at boot that a manual fsck was required. I used the command fsck -yf /dev/sda1 to fix the problem. It worked each time; still, I want to find out what the issue is and prevent it in the future. I used gsmartcontrol to perform an extended check on the main drive (on the host). No errors were reported. I also used sudo smartctl -data -A /dev/sda , which reported:
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-5.10.0-0.bpo.9-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 19026
12 Power_Cycle_Count 0x0032 099 099 000 Old_age Always - 238
177 Wear_Leveling_Count 0x0013 091 091 000 Pre-fail Always - 126
179 Used_Rsvd_Blk_Cnt_Tot 0x0013 100 100 010 Pre-fail Always - 0
181 Program_Fail_Cnt_Total 0x0032 100 100 010 Old_age Always - 0
182 Erase_Fail_Count_Total 0x0032 100 100 010 Old_age Always - 0
183 Runtime_Bad_Block 0x0013 100 100 010 Pre-fail Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0032 058 037 000 Old_age Always - 42
195 Hardware_ECC_Recovered 0x001a 200 200 000 Old_age Always - 0
199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1
235 Unknown_Attribute 0x0012 099 099 000 Old_age Always - 88
241 Total_LBAs_Written 0x0032 099 099 000 Old_age Always - 76536051093
which looks OK to me, wear still being above 90%. So it doesn't look like a hardware issue to me. What else could be making my disk keep failing? My VM also has an LVM partition mounted via 9p , connected via the "Default: Mapped" driver in KVM. However, the VM itself is located on the main host drive. I also tried to write a new file to the mounted LVM partition after the read-only filesystem error occurred, which worked without an error. So, to be honest, I don't think this partition is related to the error either. Any idea what it could be? |
OPNsense WAN failover causes disruption when non-active WAN is down Posted: 28 May 2022 04:04 PM PDT I have the latest version of OPNsense set up in a VM on ESXi 7. OPNsense is very similar to pfSense, and I suspect the solution would apply to both. All the NICs are PCI passthrough devices: - A management interface
- WAN 1, my preferred WAN to be used all the time unless WAN 1 is failed
- WAN 2, my fallback WAN to be used all the time only when WAN 1 is failed
- A LAN interface through which all LAN clients connect to the internet
WAN 1 and WAN 2 are set up as gateways. WAN 1, being the connection I want to use all the time unless it's down or has high loss, has a priority of 254, and WAN 2 has 255, thus preferring WAN 1. There are no other enabled gateways. Both are upstream gateways. There are no ping target IPs, so instead they ping their respective gateway IPs as provided by the ISPs. I then made a gateway group with both WANs as members. Tier 1 has WAN 1 only, and Tier 2 has WAN 2 only. The "Trigger Level" is "Packet Loss or High Latency." My understanding is that having two tiers like this means Tier 1 is used 100% of the time unless it fails, then Tier 2 is used 100% of the time until Tier 1 comes back. Under normal circumstances, with both WANs up and reporting no loss, this works fine. All traffic seems to be sent through WAN 1 and WAN 2 is ignored, aside from health checks and the firewall blocking random script kiddies probing it. However, when WAN 2 starts to report packet loss and OPNsense considers it to be down, which happens usually a handful of times per day for a few minutes at a time, it is disruptive despite WAN 1 being healthy. Some devices that try to get to the internet suddenly can't until WAN 2 is healthy again. Alternatively, if I just disable the WAN 2 gateway entirely, the connection is stable, presuming of course WAN 1 also stays up. When this happens, OPNsense still says WAN 1 is healthy. The logs suggest that the routes aren't changing from WAN 1 at all when WAN 2 fails, which makes this more confusing. Here are log entries around the time of a WAN 2 failure:
2021-03-10T10:31:44 kernel pflog0: promiscuous mode disabled
2021-03-10T10:31:44 opnsense[38519] /usr/local/etc/rc.filter_configure: ROUTING: keeping current default gateway '(WAN 1's IP)'
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: The LAN_DHCP monitor address is empty, skipping.
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: Choose to bind WAN 1 on (WAN 1's IP) since we could not find a proper match.
2021-03-10T10:31:44 opnsense[41442] plugins_configure monitor (execute task : dpinger_configure_do())
2021-03-10T10:31:44 opnsense[41442] plugins_configure monitor ()
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: ROUTING: keeping current default gateway '(WAN 1's IP)'
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: ROUTING: setting IPv4 default route to (WAN 1's IP)
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: ROUTING: IPv4 default gateway set to opt2
2021-03-10T10:31:44 opnsense[41442] /system_gateways.php: ROUTING: entering configure using defaults
2021-03-10T10:31:43 configctl[59484] event @ 1615397503.05 exec: system event config_changed
2021-03-10T10:31:43 configctl[59484] event @ 1615397503.05 msg: Mar 10 10:31:43 OPNsense.localdomain config[41442]: config-event: new_config /conf/backup/config-1615397503.052.xml
And here are gateway log entries around the same time:
2021-03-10T10:31:03 dpinger[59862] GATEWAY ALARM: WAN 2 (Addr: ******* Alarm: 1 RTT: 8702us RTTd: 7796us Loss: 22%)
2021-03-10T10:31:03 dpinger[77013] WAN 2 *******: Alarm latency 8702us stddev 7796us loss 22%
2021-03-10T10:30:05 dpinger[79106] GATEWAY ALARM: WAN 2 (Addr: ******* Alarm: 0 RTT: 9164us RTTd: 14634us Loss: 11%)
The most confusing entry to me is "keeping current default gateway", which to me explicitly says "WAN 2 failed, but I don't care because WAN 1 is the default gateway", but maybe I'm misinterpreting it. So, in this dual-WAN failover scenario, why does the failover of the second, supposedly inactive WAN cause the active WAN to be unusable? |
Istio: HTTPS traffic converted to HTTP with port set as 443 Posted: 28 May 2022 09:03 AM PDT Bug description We have set up Istio on an EKS cluster, and a Java app is hosted in it. The pod has been created, along with a service of type ClusterIP. We have created a VirtualService and a Gateway, and set the istio ingress gateway as a NodePort. In front of the istio ingress gateway we placed an AWS Application Load Balancer, and created a Route53 (DNS) entry which points to that ALB. Now all the services and pods are up. On hitting the DNS with https, the request info we get in the Spring Boot Java application has been changed from https to http, with the port set to 443. For example, if I make a request such as curl https://sarath.somedomain.com/helloworld , in our Java app the request info comes through as http://sarath.somedomain.com:443/helloworld . We have also printed headers like x-forwarded-proto and x-forwarded-port in our Java app: the x-forwarded-proto value we get is http, the x-forwarded-port value is 443, and the x-forwarded-for value is null. Our key suspicion is that Envoy does this protocol conversion from https to http. Expected behavior If I hit curl https://sarath.somedomain.com/helloworld , in our Java app we should get the request info as https://sarath.somedomain.com/helloworld . Steps to reproduce the bug
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sarath-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "sarath.somedomain.com"
    - "prasath.somedomain.com"
Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sp-vs
spec:
  hosts:
  - "sarath.somedomain.com"
  gateways:
  - sarath-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: javaapp-svc
ALB Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sarathingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/subnets: subnet-XXXXXXXX,subnet-XXXXXXXX
    alb.ingress.kubernetes.io/security-groups: sg-XXXXXXXX
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:XXXXXXXX:certificate/XXXXXXXX
spec:
  rules:
  - host: sarath.somedomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: istio-ingressgateway
          servicePort: 80
App Service:
apiVersion: v1
kind: Service
metadata:
  name: javaapp-svc
  labels:
    app: javaapp-svc
    name: javaapp-svc
spec:
  selector:
    app: javaapp-svc
  ports:
  - name: http
    port: 80
    targetPort: 9090
    protocol: TCP
Version (include the output of istioctl version --remote and kubectl version )
Output of istioctl version --remote :
client version: 1.2.5
citadel version: 1.2.5
galley version: 1.2.5
ingressgateway version: 1.2.5
pilot version: 1.2.5
policy version: 1.2.5
sidecar-injector version: 1.2.5
telemetry version: 1.2.5
Output of kubectl version :
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-29T16:15:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
How was Istio installed?
Istio is installed through Helm; below are the commands used:
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
helm template \
  --set gateways.istio-ingressgateway.type=NodePort \
  --set prometheus.enabled=true \
  --set grafana.enabled=true \
  --set tracing.enabled=true \
  --set kiali.enabled=true \
  --set "kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
  --set "kiali.dashboard.grafanaURL=http://grafana:3000" \
  install/kubernetes/helm/istio \
  --name istio --namespace istio-system > $HOME/istio.yaml
Environment where bug was observed (cloud vendor, OS, etc.): AWS, EC2 machine, Amazon Linux v2 |
Site to Site VPN devices behind ISP modem Posted: 28 May 2022 04:04 PM PDT [image: site-to-site VPN setup when behind ISP wifi router] I've read through many site-to-site VPN posts on this site, but I still haven't found a solution to my issue. Here is the situation: I was asked to connect two sites together so that remote site B's users can print to printers and access the file shares at site A. Both sites use Comcast Business modems, and have for quite some time, so turning the ISP modems to bridge mode is an absolute last-resort option. I'm not entirely clear on how to set up the VPN routers so traffic from the Xfinity Business modems can see each other over the site-to-site VPN connection. Both sites connect, but I can't ping either one. Here's what I've done so far:
1. At both the remote site and the main site I've gone into the ISP cable modem and changed the local IP ranges so they are different; site A's local IP range is 10.1.10.X and site B's range is 10.20.30.X.
2. I've gone into the TP-Link interfaces and set WAN 1 to a static IP on the local subnet for each site. The TP-Link at site A has WAN1 set to 10.1.10.215, and site B's is 10.20.30.233.
3. On both Comcast modems I've set up port forwarding for the respective static IPs of the TP-Link devices so that UDP ports 4500 and 500 are open.
4. I've gone into the admin panel of the TP-Link R600VPN routers, set up the site-to-site IPsec, and connected both sites together. Both sites have static public IP addresses, which are placed into the proper fields. I'm using WAN1 as the adapter in the settings.
This is where things fall apart for me. I can see that the VPN connection is working from the control panel on the TP-Link routers, but I cannot ping the local IP addresses on either site from computers connected to the cable modems. Pings to the 10.1.10.X network from site B time out, and vice versa.
I've read about static routing, but I don't understand whether I need to set this up on just the TP-Link routers, on the Comcast modem/router, or on both. I tried messing with this already, but it didn't work either. I tried making the static routing destination for the TP-Link at site B 10.1.10.0, with hop 1 at 10.20.30.1 on WAN1 as the interface, but that didn't work. Is there a firewall rule I need to look for that might prevent this from working? I'm also having a hard time understanding which physical network ports need to be connected. Right now I have the TP-R600VPN routers connected via the WAN1 port only. Do I need to instead have this device connected on one of the 4 LAN ports? I'll keep googling for answers, but I've yet to find one that explains this type of setup, so I figured I would ask. |
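For reference, the usual shape of the static-routing piece in a setup like this (a sketch built from the addresses in the question; whether and how the modem UI exposes static routes is an assumption) is a route on each ISP modem sending the remote subnet to the local TP-Link's WAN1 address, so LAN clients behind the modem know where the far side lives:

```
# Static route on the site A Comcast modem (illustrative):
destination 10.20.30.0/24   gateway 10.1.10.215   # remote LAN via local TP-Link
# Static route on the site B Comcast modem (illustrative):
destination 10.1.10.0/24    gateway 10.20.30.233
```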
How to export Cisco AnyConnect preferences and certificates to another PC Posted: 28 May 2022 10:04 AM PDT I have two computers (PC and Mac) connected to different organization VPNs. I want to be able to connect from the Mac to the same VPNs set up on the PC, so I need to export the VPN list, certificates, etc. to my Mac. Is there a way to do it, or what do I need to copy manually? I assume I copy everything from: %ProgramData%\Cisco\Cisco AnyConnect Secure Mobility Client\Profile to: /opt/cisco/anyconnect/profile . But how do I properly merge it then? I assume I also need to export the certificates properly. |
AWS Systems Manager: Verify that the IAM instance profile attached to the instance includes the required permissions Posted: 28 May 2022 03:04 PM PDT I am trying to access an EC2 instance using AWS Systems Manager. For that, I've created a role with the following policies attached:
AmazonEC2RoleforSSM
AmazonSSMAutomationApproverAccess
AmazonSSMFullAccess
AmazonSSMAutomationRole
The role is attached to the EC2 instances. The EC2 instance is listed in the Session Manager instance list; however, when I try to connect I get the following error: "The version of SSM Agent on the instance supports Session Manager, but the instance is not configured for use with AWS Systems Manager. Verify that the IAM instance profile attached to the instance includes the required permissions." I tried the troubleshooting methods but am still getting the same error. One more thing: even after I removed the attached role, the EC2 instance still shows up in the Session Manager instance list. |
App Engine Flex restart signal Posted: 28 May 2022 01:04 PM PDT I have a long-running Go process on an App Engine Flex instance, deployed via a Docker image. Most of the time when I deploy to the live version, it sends a SIGTERM to the app. I can catch this and do a graceful shutdown. It's great. Other times, the process just seems to disappear and a new instance is created. I don't get any log output and no indication of what happened. This definitely seems to happen if I change the number of instances (via manual_scaling), but sometimes it happens on a normal deploy. Is there a way to get a SIGTERM consistently? Are there other strategies I can use to know when the instance is being killed/restarted? Update: I tried a few test cases: - "Delete" instance in the App Engine UI. The instance shut down cleanly - sending signals - and rebooted, since it's configured to have one instance.
- Deploy, changing from 1 to 2 instances. Existing instance rebooted cleanly with signals. New instance came up.
- Deploy, changing from 2 to 1 instances. One existing instance rebooted cleanly with signals. The other one went poof, for lack of a better description. Viewing 'All logs' showed STDERR from my app, then nothing. There was no output in the vm.events, vm.syslog, or vm.shutdown logs, which report lots of interesting stuff during a reboot. I also know that no signals were received by my app, because the database was left in a dirty state.
It's this last case that I'd love some more insight into, thanks! Please also let me know if there's a better place or way to ask this question. |
Error in job agent when running a package in SSISDB catalog (SQL Server) Posted: 28 May 2022 03:04 PM PDT I created a simple package in Visual Studio, then deployed it to the SSISDB catalog, which is on a different server. I am using a SQL Server Agent job to execute this package with a proxy account, but I am getting this error in the job history: "Executed as user: *\test*. Microsoft (R) SQL Server Execute Package Utility Version 13.0.5153.0 for 64-bit Copyright (C) 2016 Microsoft. All rights reserved. Failed to execute IS server package because of error 0x80131904. Server: ****, Package path: ****.dtsx, Environment reference Id: NULL. Description: Login failed for user '*\test'. Source: .Net SqlClient Data Provider. .... The package execution failed. The step failed." Kindly help me identify this issue. |
Group policy not updating on Windows Server 2012 R2 Posted: 28 May 2022 10:04 AM PDT I have a small solution for one of our customers. There are three servers: a Domain Controller, a Session Host server and a Sage server. Just recently we've noticed that group policies are no longer applying to users logging in to the Session Host server. As such, profile redirection isn't occurring, which is causing a whole host of other issues. If I run gpupdate /force I get the following response... The processing of Group Policy failed. Windows could not authenticate to the Active Directory service on a domain controller. (LDAP Bind function call failed). Look in the details tab for error code and description. A quick look in the system error logs shows me Kerberos errors all over the place... Event 1006 seems to show 'Invalid Credentials'... although I'm unsure what credentials it's complaining about. I've checked with my colleagues and nobody is owning up to having changed anything, so I'm stumped as to what has caused this. Can anyone point me in the right direction? |
Google Authenticator FreeRADIUS Posted: 28 May 2022 05:02 PM PDT I'm trying to implement the Google Authenticator PAM module in FreeRADIUS on RHEL 7. I've loosely followed this guide: http://www.supertechguy.com/help/security/freeradius-google-auth The reason I say loosely is that things appear to have changed with the Google module recently, so I have built it by performing: sudo yum -y install freeradius freeradius-utils git gcc pam-devel qrencode qrencode-libs qrencode-devel git autoconf automake libtool freeradius-mysql git clone https://github.com/google/google-authenticator-libpam cd google-authenticator-libpam sudo ./bootstrap.sh ./configure make sudo make install This all worked OK. I've then configured the configuration files using the guide. My /etc/pam.d/radiusd file looks like this: #%PAM-1.0 #auth include password-auth #account required pam_nologin.so #account include password-auth #password include password-auth #session include password-auth auth requisite pam_google_authenticator.so forward_pass #auth required pam_unix.so use_first_pass account required pam_permit.so session required pam_permit.so Everything runs OK, but when I attempt to test it I get rejected every time. 
I've ran radiusd -X and get the following output when attempting to connect: Received Access-Request Id 168 from 127.0.0.1:48534 to 127.0.0.1:1812 length 77 User-Name = 'username' User-Password = 'Password' NAS-IP-Address = 10.133.16.125 NAS-Port = 18120 Message-Authenticator = 0x7ba3ce64279bce1f09a978dd7204ec3f (0) Received Access-Request packet from host 127.0.0.1 port 48534, id=168, length=77 (0) User-Name = 'username' (0) User-Password = 'Password' (0) NAS-IP-Address = 10.133.16.125 (0) NAS-Port = 18120 (0) Message-Authenticator = 0x7ba3ce64279bce1f09a978dd7204ec3f (0) # Executing section authorize from file /etc/raddb/sites-enabled/default (0) authorize { (0) filter_username filter_username { (0) if (!&User-Name) (0) if (!&User-Name) -> FALSE (0) if (&User-Name =~ / /) (0) if (&User-Name =~ / /) -> FALSE (0) if (&User-Name =~ /@.*@/ ) (0) if (&User-Name =~ /@.*@/ ) -> FALSE (0) if (&User-Name =~ /\\.\\./ ) (0) if (&User-Name =~ /\\.\\./ ) -> FALSE (0) if ((&User-Name =~ /@/) && (&User-Name !~ /@(.+)\\.(.+)$/)) (0) if ((&User-Name =~ /@/) && (&User-Name !~ /@(.+)\\.(.+)$/)) -> FALSE (0) if (&User-Name =~ /\\.$/) (0) if (&User-Name =~ /\\.$/) -> FALSE (0) if (&User-Name =~ /@\\./) (0) if (&User-Name =~ /@\\./) -> FALSE (0) } # filter_username filter_username = notfound (0) [preprocess] = ok (0) [chap] = noop (0) [mschap] = noop (0) [digest] = noop (0) suffix : Checking for suffix after "@" (0) suffix : No '@' in User-Name = "username", looking up realm NULL (0) suffix : No such realm "NULL" (0) [suffix] = noop (0) eap : No EAP-Message, not doing EAP (0) [eap] = noop (0) files : users: Matched entry DEFAULT at line 187 (0) [files] = ok rlm_sql (sql): Reserved connection (4) (0) sql : User not found in any groups rlm_sql (sql): Released connection (4) rlm_sql (sql): Closing connection (0), from 1 unused connections rlm_sql (sql): Closing connection (3): Hit idle_timeout, was idle for 540 seconds rlm_sql (sql): You probably need to lower "min" rlm_sql (sql): 
Closing connection (2): Hit idle_timeout, was idle for 540 seconds rlm_sql (sql): You probably need to lower "min" rlm_sql (sql): Closing connection (1): Hit idle_timeout, was idle for 540 seconds rlm_sql (sql): You probably need to lower "min" (0) [sql] = notfound (0) [expiration] = noop (0) [logintime] = noop (0) WARNING: pap : No "known good" password found for the user. Not setting Auth-Type (0) WARNING: pap : Authentication will fail unless a "known good" password is available (0) [pap] = noop (0) } # authorize = ok (0) Found Auth-Type = PAM (0) # Executing group from file /etc/raddb/sites-enabled/default (0) authenticate { pam_pass: using pamauth string <radiusd> for pam.conf lookup pam_pass: function pam_authenticate FAILED for <username>. Reason: Module is unknown (0) [pam] = reject (0) } # authenticate = reject (0) Failed to authenticate the user (0) Using Post-Auth-Type Reject (0) # Executing group from file /etc/raddb/sites-enabled/default (0) Post-Auth-Type REJECT { (0) [sql] = noop (0) attr_filter.access_reject : EXPAND %{User-Name} (0) attr_filter.access_reject : --> username (0) attr_filter.access_reject : Matched entry DEFAULT at line 11 (0) [attr_filter.access_reject] = updated (0) eap : Request didn't contain an EAP-Message, not inserting EAP-Failure (0) [eap] = noop (0) remove_reply_message_if_eap remove_reply_message_if_eap { (0) if (&reply:EAP-Message && &reply:Reply-Message) (0) if (&reply:EAP-Message && &reply:Reply-Message) -> FALSE (0) else else { (0) [noop] = noop (0) } # else else = noop (0) } # remove_reply_message_if_eap remove_reply_message_if_eap = noop (0) } # Post-Auth-Type REJECT = updated (0) Delaying response for 1 seconds Waking up in 0.3 seconds. Waking up in 0.6 seconds. (0) Sending delayed response (0) Sending Access-Reject packet to host 127.0.0.1 port 48534, id=168, length=0 Sending Access-Reject Id 168 from 127.0.0.1:1812 to 127.0.0.1:48534 Waking up in 3.9 seconds. 
(0) Cleaning up request packet ID 168 with timestamp +540 Ready to process requests (sorry about the long output). The lines that confuse me are: pam_pass: using pamauth string <radiusd> for pam.conf lookup pam_pass: function pam_authenticate FAILED for <username>. Reason: Module is unknown If I search for the file I get returned: [ ~]$ sudo find / -name "pam_google_authenticator.so" /usr/local/lib/security/pam_google_authenticator.so /home//google-authenticator-libpam/.libs/pam_google_authenticator.so [ ~]$ and I believe /usr/local/lib/security is the right place but I can't comment 100% on that one. Anyone have any ideas or advice where I can look next or if I am missing something obvious? Thanks. |
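"Module is unknown" from pam_authenticate means PAM failed to load the module at all, not that authentication itself failed. On RHEL 7 the default PAM module directory is /usr/lib64/security, so a module installed to /usr/local/lib/security will not be found by its short name; SELinux can also block reads from non-standard paths. A hedged sketch of /etc/pam.d/radiusd referencing the module by absolute path:

```
#%PAM-1.0
# Assumes the module really lives at this path (as the find output suggests);
# an absolute path bypasses PAM's default module search directory.
auth      requisite   /usr/local/lib/security/pam_google_authenticator.so forward_pass
account   required    pam_permit.so
session   required    pam_permit.so
```

Alternatively, copying pam_google_authenticator.so into /usr/lib64/security and keeping the short name should work; if SELinux is enforcing, check /var/log/audit/audit.log for denials.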
Windows DNS Server: 3 seconds before NXDOMAIN response Posted: 28 May 2022 05:02 PM PDT I'm wondering if there is a way to control the time required before an NXDOMAIN response is given. It seems to me that if a query is made for a record that doesn't exist, it takes 3 seconds before the server replies with NXDOMAIN. This appears to be the case in both Windows Server 2008 R2 and Windows Server 2012 R2. I have a packet capture below. For the purposes of my question, my server is authoritative for example.com and example2.com: 09:13:06.846116 IP 10.200.242.165.49505 > 10.200.1.13.53: 33831+ A? asr-1.example.com. (46) 09:13:09.855028 IP 10.200.1.13.53 > 10.200.242.165.49505: 33831 NXDomain* 0/1/0 (112) I pinged a host that I knew did not exist, but it seems to have taken a full 3 seconds before telling me it didn't exist (from 09:13:06 to 09:13:09). Lest you think it's forwarding somewhere to resolve asr-1.example.com, I get the same result if I do nslookup -norecurse asr-1.example.com. Just to show you that the server is indeed fast, here's a packet capture output from where I looked up the host on its actual domain: 09:13:09.855416 IP 10.200.242.165.49542 > 10.200.1.13.53: 47128+ A? asr-1.example2.com. (46) 09:13:09.855889 IP 10.200.1.13.53 > 10.200.242.165.49542: 47128* 1/0/0 (62) That is a 0.4 millisecond response... so it's not that my DNS server is slow. So... is there a way to configure the DNS server to not wait so long to return an NXDOMAIN response for a domain that it's authoritative for? |
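As a sanity check from the client side, the delay can be measured without a packet capture; here is a small Python sketch (a hypothetical helper, not part of the question) that times a lookup through the OS resolver:

```python
import socket
import time

def timed_lookup(name):
    """Resolve a hostname, returning (results, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        results = socket.getaddrinfo(name, None)
    except socket.gaierror:
        results = []  # NXDOMAIN or other resolution failure
    return results, time.perf_counter() - start

results, elapsed = timed_lookup("localhost")
print(f"{len(results)} records in {elapsed:.3f}s")
```

Running it against a name known not to exist in the authoritative zone should reproduce the roughly 3-second gap seen in the capture, while an existing name returns almost instantly.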
Files in ignored folder still show in GitHub Desktop Posted: 28 May 2022 11:01 AM PDT I've just added a brand new local repository using GitHub Desktop. There are folders I don't want to commit, so I added these to .gitignore as follows: cache/*.* Now the problem is that the files within the ignored folder still show up. How can I automatically get rid of these files? I could check all the checkboxes manually, right-click and choose ignore, but that would be a lot of work. Or should I publish these files, hoping they'll get ignored and won't show up in the future? |
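For context on why the files may still show: a .gitignore rule only affects untracked files, and the pattern cache/*.* only matches names containing a dot, whereas cache/ ignores the whole directory. A command-line sketch of untracking such a folder (run here on a throwaway repo; the real repository, once fixed this way, is what GitHub Desktop would then pick up):

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
mkdir cache && echo junk > cache/page.html
git add . && git commit -qm "initial commit with cache tracked"
echo 'cache/' > .gitignore            # ignores the whole folder, dotted names or not
git rm -r -q --cached cache           # untrack the files without deleting them
git add .gitignore
git commit -qm "stop tracking cache/"
git status --porcelain                # no output: cache/ is now ignored
```

The git rm --cached step removes the files from the index only; they stay on disk and are then covered by the ignore rule.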
WildFly with client certificates: javax.net.ssl.SSLException: Received fatal alert: unknown_ca Posted: 28 May 2022 11:01 AM PDT I'm trying to set up a WildFly 9 server with client certificate authentication. To do that I have: On the client: create a self-signed certificate: keytool -genkey -keystore client.keystore -validity 3650 -keyalg RSA -keysize 4096 -storetype pkcs12 -alias myClient
On the Client: export certificate keytool -exportcert -keystore client.keystore -alias myClient -storetype pkcs12 -file myClient.crt
On the server: Import the crt certificate file into the truststore keytool -import -file myClient.crt -keystore /etc/pki/wildfly/client.truststore
On the server: adjusting the WildFly config (enabling client certificate authentication): <security-realm name="UndertowRealm"> <server-identities> <ssl> <keystore path="/etc/pki/wildfly/server.keystore" keystore-password="123456" alias="server" key-password="123456"/> </ssl> </server-identities> <authentication> <truststore path="/etc/pki/wildfly/client.truststore" keystore-password="123456"/> <local default-user="$local" skip-group-loading="true"/> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> ... <subsystem xmlns="urn:jboss:domain:undertow:2.0"> <server name="default-server"> <https-listener name="https" socket-binding="https" security-realm="UndertowRealm" verify-client="REQUIRED"/> ... </server> </subsystem>
The client is a Python script. For this client I need the certificate and the key in PEM format. To export them (all on the client side): Export the certificate key: keytool -v -importkeystore -srckeystore client.keystore -srcalias myClient -destkeystore myClient.key.tmp.pem -deststoretype PKCS12 -destkeypass 123456
Remove the password from the key (yes, of course I will also restrict access to the key later by changing the file mode): openssl pkcs12 -in myClient.key.tmp.pem -nocerts -nodes > myClient.key.pem Remove everything outside '-----BEGIN PRIVATE KEY-----' and '-----END PRIVATE KEY-----' from myClient.key.pem Export the certificate as PEM: keytool -exportcert -keystore client.keystore -alias myClient -rfc -file myClient.pem But every time I try to connect to the server I get (on the server) the error: 2016-10-31 09:50:55,102 DEBUG [io.undertow.request.io] (default I/O-1) Error reading request: javax.net.ssl.SSLException: Received fatal alert: unknown_ca at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666) at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634) at sun.security.ssl.SSLEngineImpl.recvAlert(SSLEngineImpl.java:1800) at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:1083) at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907) at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) at org.xnio.ssl.JsseSslConduitEngine.engineUnwrap(JsseSslConduitEngine.java:688) at org.xnio.ssl.JsseSslConduitEngine.unwrap(JsseSslConduitEngine.java:620) at org.xnio.ssl.JsseSslConduitEngine.unwrap(JsseSslConduitEngine.java:574) at org.xnio.ssl.JsseSslStreamSourceConduit.read(JsseSslStreamSourceConduit.java:89) at org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127) at io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:150) at io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:128) at io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:56) at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) at 
org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66) at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:88) at org.xnio.nio.WorkerThread.run(WorkerThread.java:539) If I disable client certificate authentication, everything is fine, so there must be something wrong with the client certificate setup. Does anybody know what's wrong? |
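Unrelated to the truststore, it is worth first confirming that the exported PEM certificate and key actually belong together, since a mismatched pair also kills the handshake. A hedged sketch of that check, demonstrated on a throwaway self-signed pair (substitute myClient.pem and myClient.key.pem from the steps above):

```shell
cd "$(mktemp -d)"
# Generate a disposable cert/key pair standing in for myClient.pem / myClient.key.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
    -keyout demo.key.pem -out demo.pem 2>/dev/null
# The RSA modulus must be identical in the certificate and the private key
cert_mod=$(openssl x509 -noout -modulus -in demo.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in demo.key.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```

If the moduli match, unknown_ca points back at trust configuration: the server truststore must contain the client certificate (or its issuing CA), and the Python client must equally trust the certificate the server presents.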
Apache log: 'myipha.php not found or unable to stat' and others Posted: 28 May 2022 01:04 PM PDT I recently started getting a long series of entries like this in my Apache error.log: [Thu Jun 30 13:03:22.005214 2016] [:error] [pid 15759] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:27.164111 2016] [:error] [pid 15644] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:32.314190 2016] [:error] [pid 15757] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:37.462354 2016] [:error] [pid 15514] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:42.559487 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphb.php' not found or unable to stat [Thu Jun 30 13:03:42.606343 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphd.php' not found or unable to stat [Thu Jun 30 13:03:42.653108 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myipha.php' not found or unable to stat [Thu Jun 30 13:03:42.699306 2016] [:error] [pid 15760] [client xx.xx.xxx.xxx:xxxxx] script '/var/www/html/myiphc.php' not found or unable to stat The PID belongs to Apache. I couldn't find much on the internet about it. Does anyone know what it is? |
Asterisk Cancel Transfer (atxfer) Posted: 28 May 2022 02:04 PM PDT I'm using the Asterisk 13 AMI to start an atxfer. It works so far, but how can I cancel the transfer action? Example: Bob calls Alice. After some time talking, Alice starts an attended transfer to Charles. Charles is not interested but will not hang up, for whatever reason. How can Bob now hang up Charles and get Alice back? If I use the disconnect feature (and the H dialplan option), Bob hangs up the call with Charles but does not get Alice back. Bob then has an ongoing silent call and Alice still listens to the MOH. If either of them hangs up, the other call hangs up automatically. Which feature or feature code is needed to get Alice back? Am I doing it right? |
Win7 loses connection to network shares after resume unless server specified using FQDN Posted: 28 May 2022 09:03 AM PDT My Win7 client has a connection to a Linux server and its shared folders. The problem occurs when the computer wakes up after sleep: one of the shared folders is no longer accessible, and I receive the following message: Error code: 80070035, The network path was not found. I have this problem with only one specific folder. When I restart the computer, the problematic folder is accessible again. If I log off before sleep, the folder is accessible after wakeup. If I access the folder using the FQDN of the server, or the server's IP, it is also accessible. As a temporary workaround I mapped the folder to a network drive using the FQDN and it's working fine, but it's inconvenient since every other folder is accessible on the server. To summarize: \\server\problematicshare no longer works after resume (the Samba server sees my client connect, then disconnect a few seconds later, while I receive the above error message) \\server\othershare works after resume \\fqdn.of.server\problematicshare always works \\ip.of.server\problematicshare always works - once the problem manifests, I'm no longer able to restart the "Workstation" service (it is not responding)
- restarting the "Computer Browser" service has no apparent effect
- the event log doesn't contain anything that seems relevant
- "ping server" works
Link to packet dump: http://pastebin.ca/2836628 This packet trace was obtained on the client, using wireshark, right after resume from suspend. Explanation: - 192.168.1.110 is my client
- 192.168.3.255 is the local broadcast address (this is a /22 network)
- 192.168.0.58 is the Samba domain controller and also the server that shares the problematic share
- 192.168.1.254 is the DNS server
The packet trace has been post-processed (all IP addresses have been changed by replacing the prefix; domain, server and client names have been replaced with different strings of the same length). You'll note that the client tries to resolve "SERVERNAM." in DNS (i.e. without qualifying the name of the server), and that this results in nxdomain. It is plausible that if this lookup were to succeed, the connection to the share would work. However, "SERVERNAM" should be resolvable via WINS; also, what causes the change in behaviour when suspending? The same DNS lookup fails the same way before the suspend. There are also some samba log messages that are relevant and that will be interspersed in the packet trace at the appropriate points. [2014/08/28 09:54:56.541088, 0] rpc_server/srv_pipe.c:500(pipe_schannel_auth_bind) pipe_schannel_auth_bind: Attempt to bind using schannel without successful serverauth2 [2014/08/28 09:54:56.661321, 0] rpc_server/netlogon/srv_netlog_nt.c:976(_netr_ServerAuthenticate3) _netr_ServerAuthenticate3: netlogon_creds_server_check failed. Rejecting auth request from client WORKSTATION--7 machine account WORKSTATION--7$ (If there were a problem with the machine account as such, it would be just as impossible to access the shares using the fqdn of the server, so, while this may be relevant it's certainly not the root cause.) |
Apache Error - child process [xxx] still did not exit, sending a SIGTERM & ServerContext: 1 leaked_rewrite_drivers on destruction Posted: 28 May 2022 02:04 PM PDT I have many error log entries reading ServerContext: 1 leaked_rewrite_drivers on destruction and child process 10494 still did not exit, sending a SIGTERM. Does anyone have any idea how to fix these errors? |
No data received - Passenger, (52) Empty reply from server Posted: 28 May 2022 12:03 PM PDT I set up Elastic Beanstalk for my Rails application last month (Passenger). All was good, but all of a sudden my servers are not responding well. Accessing my application gives 'No data received - Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.' in the browser. When I curl the application, I get curl: (52) Empty reply from server. Is it a Passenger issue? I am a newbie to administering servers. |
How to enable all HTTP methods in an Apache HTTP Server Posted: 28 May 2022 10:41 AM PDT How can I enable the handling of all HTTP methods defined in RFC 2616 on the Apache web server? These would be: OPTIONS GET HEAD POST PUT DELETE TRACE CONNECT I am using the Apache HTTP Server, version 2.2.22 (Ubuntu). Here is my .htaccess file: <Location /output> Dav On <LimitExcept GET HEAD OPTIONS PUT> Allow from all </LimitExcept> </Location> Here is the output I get from running telnet; there is no PUT method: Escape character is '^]'. OPTIONS / HTTP/1.0 HTTP/1.1 200 OK Date: Tue, 09 Oct 2012 06:56:42 GMT Server: Apache/2.2.22 (Ubuntu) Allow: GET,HEAD,POST,OPTIONS Vary: Accept-Encoding Content-Length: 0 Connection: close Content-Type: text/html Connection closed by foreign host. Any thoughts on this? |
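Two observations, offered as a hedged sketch rather than a confirmed fix: <Location> containers are not permitted in .htaccess (Apache only allows them in the main server or virtual-host configuration), and the Allow: GET,HEAD,POST,OPTIONS response suggests mod_dav is not active for that URL, since WebDAV is what implements PUT and DELETE here. Something like this in the server config, assuming mod_dav and mod_dav_fs are loaded and the lock-file path (hypothetical below) is writable:

```apache
# Goes in httpd.conf or a vhost, not .htaccess.
DavLockDB /var/lock/apache2/DavLockDB
<Location /output>
    Dav On
    <LimitExcept GET HEAD OPTIONS PUT>
        Allow from all
    </LimitExcept>
</Location>
```

With DAV active, a subsequent OPTIONS request should advertise the WebDAV method set rather than just the core four.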
dpkg: warning: files list file for package 'x' missing Posted: 28 May 2022 01:03 PM PDT I get this warning for several packages every time I install any package or perform apt-get upgrade . Not sure what is causing it; it's a fresh Debian install on my OpenVZ server and I haven't changed any dpkg settings. Here's an example: root@debian:~# apt-get install cowsay Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: filters The following NEW packages will be installed: cowsay 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 21.9 kB of archives. After this operation, 91.1 kB of additional disk space will be used. Get:1 http://ftp.us.debian.org/debian/ unstable/main cowsay all 3.03+dfsg1-4 [21.9 kB] Fetched 21.9 kB in 0s (70.2 kB/s) Selecting previously unselected package cowsay. dpkg: warning: files list file for package 'libssh2-1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libkrb5-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libwrap0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcap2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpam-ck-connector:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libc6:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libtalloc2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libselinux1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libp11-kit0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 
'libavahi-client3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libbz2-1.0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpcre3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgpm2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgnutls26:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libavahi-common3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcroco3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'liblzma5:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsensors4:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libbsd0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libavahi-common-data:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libss2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libblkid1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libslang2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libacl1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcomerr2:amd64' missing; assuming package has no files 
currently installed dpkg: warning: files list file for package 'libkrb5support0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'e2fslibs:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'librtmp0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libidn11:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpcap0.8:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libattr1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libdevmapper1.02.1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'odbcinst1debian2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libltdl7:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libkeyutils1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcups2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsqlite3-0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libck-connector0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'zlib1g:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libnl1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for 
package 'libfontconfig1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libudev0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsepol1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libmagic1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libk5crypto3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libunistring0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgpg-error0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libusb-0.1-4:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpam0g:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libpopt0:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgssapi-krb5-2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgeoip1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libcurl3-gnutls:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libtasn1-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libuuid1:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgcrypt11:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libgdbm3:amd64' missing; assuming 
package has no files currently installed dpkg: warning: files list file for package 'libdbus-1-3:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libsysfs2:amd64' missing; assuming package has no files currently installed dpkg: warning: files list file for package 'libfreetype6:amd64' missing; assuming package has no files currently installed (Reading database ... 21908 files and directories currently installed.) Unpacking cowsay (from .../cowsay_3.03+dfsg1-4_all.deb) ... Processing triggers for man-db ... Setting up cowsay (3.03+dfsg1-4) ... root@debian:~# Everything works fine, but these warning messages are pretty annoying. Does anyone know how I can fix this? ls -la /var/lib/dpkg/info | grep libssh : -rw-r--r-- 1 root root 327 Sep 21 15:51 libssh2-1.list -rw-r--r-- 1 root root 359 Aug 15 06:06 libssh2-1.md5sums -rwxr-xr-x 1 root root 135 Aug 15 06:06 libssh2-1.postinst -rwxr-xr-x 1 root root 132 Aug 15 06:06 libssh2-1.postrm -rw-r--r-- 1 root root 20 Aug 15 06:06 libssh2-1.shlibs -rw-r--r-- 1 root root 4377 Aug 15 06:06 libssh2-1.symbols |
change Access Permissions in Component Services > COM Security with script/api? Posted: 28 May 2022 12:03 PM PDT How do I go about changing the Access Permissions for the COM Security? I need to write new values to "Edit Limits..." and "Edit Default..." |