Saturday, May 15, 2021

Recent Questions - Server Fault


SSH public key denied Kali Nethunter

Posted: 15 May 2021 10:30 PM PDT

I am not able to SSH to Amazon AWS from my Kali NetHunter device (a Nexus 6P); I get a "public key denied" error.

However, I can SSH without any issues from my desktop, a computer that has Kali Linux installed directly.

Here is the error.

──(root💀kali)-[/sdcard/Download]
└─# ssh -i kali-cloud.pem ec2user@ec2-x-x-x-x.us-east-2.compute.amazonaws.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0660 for 'kali-cloud.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "kali-cloud.pem": bad permissions
ec2user@ec2-x-x-x-x.us-east-2.compute.amazonaws.com: Permission denied (publickey).

Kindly suggest.

Regards Skorp
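The warning itself points at the key file's mode. A minimal sketch of the usual workaround, assuming the key lives on /sdcard, which is typically a FAT/exFAT mount where chmod has no effect, so the key has to be copied to a native Linux filesystem first:

# /sdcard usually ignores POSIX permissions, so move the key into the chroot's home first
cp /sdcard/Download/kali-cloud.pem ~/
chmod 400 ~/kali-cloud.pem
ssh -i ~/kali-cloud.pem ec2user@ec2-x-x-x-x.us-east-2.compute.amazonaws.com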

I want to get aws kinesis data using Kubeless

Posted: 15 May 2021 09:17 PM PDT

I built a controller that detects writes to Kinesis (kinesistriggers.kubeless.io).

When this controller detects a write to Kinesis, the Python function associated with the stream is executed.

But although the write to Kinesis was successful, the function was not executed.

The Python function just prints the event.

def hello(event, context):
    print(event)
    return event['data']

The environment is as follows

$ kubeless function
NAME    NAMESPACE       HANDLER         RUNTIME         DEPENDENCIES    STATUS
hello   default         test.hello      python3.7                       1/1 READY

$ kubeless trigger kinesis list
NAME            NAMESPACE       REGION          STREAM    SHARD         FUNCTION NAME
test-trigger    default         ap-northeast-1  k8stest   shardId-999   hello

$ aws kinesis describe-stream --stream-name k8stest
{
    "StreamDescription": {
        "KeyId": null,
        "EncryptionType": "NONE",
        "StreamStatus": "ACTIVE",
        "StreamName": "k8stest",
        "Shards": [
            {
                "ShardId": "shardId-999"
            }
        ],
    }

Could you please give an overview of the migration approach from Azure SQL single instance/SQL VM to Azure SQL Managed Instance?

Posted: 15 May 2021 09:12 PM PDT

Could you please give an overview of the migration approach from Azure SQL single instance/SQL VM to Azure SQL Managed Instance? Could you please provide input on the following: the backup/restore strategy used for the migration.

Can we use backup files stored in a container and restore them to the Managed Instance? If so, what basic connection approach do we need to follow? Or should we go with DMS (offline)? I tried this but had no luck. Please explain what can be done.

I didn't find any relevant content or approach on any blog or site for this.

Why are ARP requests not being received by all hosts on other switches? (Ubiquiti switches)

Posted: 15 May 2021 06:27 PM PDT

  • I have this wired Ethernet network topology, illustrated below
    • As far as this question is concerned, there is no Wi-Fi.
  • At one end, is a Windows Server 2012 R2 box which is a Domain Controller and DNS Server.
    • Attached to the same switch is another Windows Server box.
  • At the other end, through 3 switches (USW Lite 1, USW, USW Lite 2) is an Apple TV 4k.
[Windows Server Domain Controller + DNS Server]
    |
[USW Lite 1]---[Another server]
    |
[USW]----[UDM Pro]---(Internet)
    |
[USW Lite 2]
    |
[Apple TV]
  • For reasons unknown, ever since last Tuesday (Patch Tuesday?) the Apple TV has been unable to communicate with the DNS server and vice versa (I've hard-reset the Apple TV to no avail). This means the Apple TV cannot perform DNS queries unless I disable the DHCP-assigned DNS server and use a public DNS server, but then the Apple TV can't resolve other Windows domain hosts on the network by name.
  • The ping utility on either Windows Server boxes fails to contact the IPv4 address of the Apple TV: it fails with the error "Destination host unreachable"
  • All devices have identical subnet masks and everything except the Domain Controller is using DHCP, so their IP addresses are all in-range:
    • 172.16.1.1 for the UDM Pro.
    • 172.16.1.2 for the Domain Controller and DNS.
    • 172.16.1.100-199 is the DHCP address range.
    • 172.16.1.101 is the Apple TV.

Troubleshooting time:

  • I fired up Wireshark on the Domain Controller (DC) to see why the ping 172.16.1.101 was failing.
    • I saw the DC was broadcasting ARP requests to get the MAC address for 172.16.1.101 but it never received any responses.
    • I plugged my laptop into the central USW (but not the USW Lite 2 just yet) and fired up Wireshark there to see if the ARP broadcasts were at least making it that far through the network and indeed they were.
      • So the central USW isn't dropping ARP packets, at least.
  • When I ping the Apple TV from my laptop while running Wireshark on my laptop, Wireshark shows the Apple TV responding to the ARP requests from my laptop.
    • The ARP requests that were broadcasted from the DC were identical to my laptop's ARP request broadcasts (with the exception of the destination address, of course).
      • So I'm assuming that what's happening is either:
        • The Apple TV is receiving the ARP broadcasts from the DC and is ignoring them for some reason.
          • or - the [USW Lite 2] is dropping the ARP broadcasts from the DC before the Apple TV can get them, but the switch does let my laptop's ARP broadcasts go through. I wondered if it might be because of some kind of TTL, but ARP packets don't have TTLs...
  • Another experiment: I changed the Apple TV's network configuration to use a static IP address (172.16.1.3), and changing its IP address should trigger an ARP broadcast across the whole network that all hosts should see.
    • My laptop, still plugged into the central USW, did see the Apple TV's ARP Announcement broadcasts, however the DC did not.
      • So this shows the ARP messages likely don't seem to be travelling beyond 2 levels of switches...

I looked through my Ubiquiti Uni-Fi configuration and none of the switches and none of their ports have any settings enabled that would seemingly restrict ARP broadcast domains or establish VLANs.

Unfortunately I'm not currently able to plug my laptop directly in to the [USW Lite 2] switch adjacent to the Apple TV - which is a shame as that would definitely reveal if the switches were forwarding the DC's ARP broadcasts to the Apple TV - but the fact that the Apple TV's ARP broadcasts were being received by my laptop and not the server suggests that that's what is happening anyway.

So why are the ARP broadcast packets being dropped after 2 hops?
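For completeness, a couple of hedged checks that can be run from the Windows servers to confirm what the DC actually has cached and to force a fresh ARP exchange (these are standard Windows commands; the IP is the Apple TV's address from above):

rem Inspect the DC's ARP cache for the Apple TV's entry (or lack of one)
arp -a 172.16.1.101

rem Flush the ARP cache and retry, so a stale or incomplete entry isn't masking the problem
netsh interface ip delete arpcache
ping 172.16.1.101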

Pulling multiple cables through conduit - pull one at a time, or all at once?

Posted: 15 May 2021 05:54 PM PDT

I've got a length of 2" conduit (225 feet) that I need to pull several lengths of Ethernet and fiber cables through. There's already one length of fiber running down the conduit. What's the best way to get the new ones through the conduit? I haven't been able to find any specific guidance. I've pulled single lengths of cable through conduit before, but haven't had to do several in the same pipe. So, big bucket of cable lubricant in hand, do I:

  • A: Attach all the cables to the same pull head and pull multiple cables at the same time as a bundle.
  • B: Pull one cable and a pull string/rope together first, then repeat for each additional cable (so 5 cables = 5 pulls)

One-liner to install Python 3.9 on Amazon Linux?

Posted: 15 May 2021 05:50 PM PDT

Is there a simple, one-liner yum-like command to install Python 3.9 on Amazon Linux?

[root@ip-10-0-0-182 ~]# yum install python39
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
No package python39 available.
Error: Nothing to do

The top hit on Google is currently this article which tells you to download sources and build it yourself. I don't want that.
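For reference, a hedged near-one-liner on Amazon Linux 2: the extras catalog ships newer Python topics (3.8 at the time of writing; whether 3.9 has appeared since is worth checking), which avoids building from source:

# See which Python topics the extras catalog actually offers on this AMI
amazon-linux-extras list | grep -i python

# Enable and install the newest offered topic (python3.8 here; adjust if 3.9 is listed)
sudo amazon-linux-extras install python3.8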

A simple configuration: LAN + direct connect simultaneously

Posted: 15 May 2021 10:26 PM PDT

This should be an easy configuration, but I'm struggling to Google for it; the vocabulary eludes me.

I've got a desktop with onboard gigabit which finds its way to the internet via the router. Naturally.

I've got a server with onboard gigabit which finds its way to the internet in much the same way. No surprises.

The router provides DHCP to the clients.

I grabbed a couple of old Intel 10GbE cards from eBay and am keen to create a 10Gbit direct connection between my desktop and the server. I've got the cards in and the cables run. I assigned IPs to all four ports (two on each card) statically and connected one port on each. When I enable the interfaces, I get a reported 10Gb connection, but cannot figure a means by which to establish communication between the two.

Here's a map, for completeness' sake.

I've touched nothing with regards to routing tables. ip route shows whatever it defaulted to on both machines. On the desktop, for example:

[patrick@manifold-arch ~]$ ip route
default via 192.168.4.1 dev enp4s0 proto dhcp metric 100
192.168.4.0/22 dev enp4s0 proto kernel scope link src 192.168.4.73 metric 100
192.168.5.0/24 dev enp9s0f1 proto kernel scope link src 192.168.5.203 metric 101

And all I've tried to do is ping back and forth as well as SSHing from server to desktop (I've got sshd enabled on my desktop). Ping says the destination is unreachable and SSH says no route to host. I know the topology is nonstandard, but beyond ensuring the physical connection, I'm not sure how to troubleshoot this. Any pointers or direction would be most welcome!
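A minimal sketch of how such a point-to-point link is usually addressed and verified, assuming the connected 10GbE ports are enp9s0f1 on the desktop and (hypothetically) enp9s0f1 on the server, and that 192.168.5.0/24 is reserved for this link on both machines. One common pitfall: the unused second port on each card should not also hold an address in that same subnet, or the kernel may pick the wrong interface.

# Desktop side: the connected port gets an address on the point-to-point subnet
sudo ip addr add 192.168.5.203/24 dev enp9s0f1
sudo ip link set enp9s0f1 up

# Server side (interface name is an assumption; substitute the real one)
sudo ip addr add 192.168.5.1/24 dev enp9s0f1
sudo ip link set enp9s0f1 up

# Both ends should now answer on the 10GbE path without touching the default route
ping -I enp9s0f1 192.168.5.1     # from the desktop
ip route get 192.168.5.1         # confirms which interface the kernel picks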

Why a download from Hyper-V VM to host via WinSCP starts fast then slows down to kilobytes per second?

Posted: 15 May 2021 06:06 PM PDT

I created a VM (gen 2). I installed Ubuntu Server 20.04. It is connected to my physical network adapter via External Switch. Then I connect to it with WinSCP to download a 2GB ISO file and it takes ages. I noticed the same behavior when downloading via WinSCP from my Azure server.

The download starts fast, I get something like 50MB/s, then it drops to around 100kB/s. Sometimes during the process I get a higher speed for a while, then it drops again.

Is it a WinSCP bug, or what? It's not even a real network connection; it's a transfer between the VM and the host, so it should be super fast. I can download the ISO over the Internet with a torrent client in 1 minute. The same ISO takes over 20 minutes to download from my VM.

The VM has 8 cores and 4GB of RAM. My host has 16 cores and 16GB of RAM. I don't even see high resources usage during downloads. My host and VM don't do anything special during downloading the file.

Is there a faster way to transfer files between Ubuntu Server VMs and Windows hosts?

What am I doing wrong?

When I run `sudo passwd` I set my password and the script exits okay. Running `sudo chsh -s /bin/bash` immediately after fails

Posted: 15 May 2021 03:37 PM PDT

Long time lurker, first time poster.

My Linux server is behaving strangely. When I run sudo passwd, I can set my password correctly. However, if I run sudo chsh -s /bin/bash, the command returns the error:

chsh: PAM: Authentication failure  

When I run sudo echo "hi", I get the output:

hi  

I can provide more details if requested. I have repeated this step three times and I am 100% sure that I am typing my password correctly.

$ cat /etc/*release

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Thank you!


Is it possible to configure Dynamic VLANS with FreeRadius on a Ubiquity Access Point?

Posted: 15 May 2021 03:31 PM PDT

I am trying to use dynamic VLANs on a UAP. I can get non-VLAN RADIUS to work perfectly fine through FreeRADIUS. I want a single SSID to be broadcast, with the VLAN you're placed on determined by your username and password.

I have 5 or 6 different VLANS setup (101-106) and the third octet of the IP Address corresponds with the VLAN ID. (VLAN 101 is 10.17.1.0/24, VLAN 102 is 10.17.2.0/24, etc...)

My main VLAN is VLAN 101 (10.17.1.0/24) which is also my management VLAN. The AP (10.17.1.2), and the Radius server (10.17.1.3), are both connected to this network via my Nortel 5520-48T-PWR switch. All VLANS originate from my router (ER-X), and are trunked to my switch (Nortel 5520-48T-PWR)

If I broadcast each individual VLAN from the UAP, they all work so it most likely isn't an issue with tagging.

My UAP's configuration is as follows: [Wireless Network]: https://i.stack.imgur.com/GPRJV.png [ Radius Profile ]: https://i.stack.imgur.com/FHYfW.png

My FreeRADIUS configuration is as follows: [users]: https://i.stack.imgur.com/VylZ5.png [clients.conf]: https://i.stack.imgur.com/txquk.png

When I execute a radtest command (using the credentials in my users file) I get a success:

Running radtest vlan101 password 10.17.1.3 1812 12345678, I receive:

    User-Name = "vlan101"
    User-Password = "password"
    NAS-IP-Address = 127.0.1.1
    NAS-Port = 1812
    Message-Authenticator = 0x00
    Cleartext-Password = "password"
Received Access-Accept Id 97 from 10.17.1.3:1812 to 10.17.1.3:51227 length 37
    Tunnel-Type:0 = VLAN
    Tunnel-Medium-Type:0 = IEEE-802
    Tunnel-Private-Group-Id:0 = "101"

When I don't use VLANs, the AP works perfectly fine with RADIUS. However, when I use the configuration shown above, I get an error on the end device trying to connect, saying Couldn't Connect to "My SSID".

Anyone have any ideas? I'm open to any suggestions!

If you need to see more config than what I gave you, let me know.

Thanks a lot!

SSH connect to a local machine through a router

Posted: 15 May 2021 09:35 PM PDT

I am not sure if I am framing this question correctly, as I do not know much about networks. Please be patient. I have a machine that is listening on the IP 10.0.2.46. So, if I want to connect to the machine, I just issue:

ssh user@10.0.2.46  

enter the password and I get the terminal on that machine. This is when my computer is directly connected to the local area network. Here is /etc/dhcpcd.conf for my computer.

$ tail -n 3 /etc/dhcpcd.conf
static ip_address=10.0.2.36/23
static routers=10.0.2.1
static domain_name_servers=1.1.1.1 9.9.9.9

Now, due to the unavailability of a physical port, I had to connect through a router (a TP-Link Archer C50). In the router configuration at 192.18.0.1, I manually configured the above IP address, gateway and DNS nameserver. I'm connected to the internet and I can ping Cloudflare. But I cannot ping/SSH to the local machine; issuing the same command

ssh user@10.0.2.46  

nothing happens: there is no output or error, and I don't get my prompt back. It just gets stuck.

EDIT: Here is a diagram of how my machines are connected.

If I want to connect from M1 then simply issuing,

ssh user@10.0.2.46  

connects me. This was my previous setup. I want to SSH to 10.0.2.46 from M2 now. How do I do that?

My question is, how can I connect to the local machine while connected through the router?
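A hedged starting point for narrowing this down from M2: when a router like the Archer C50 runs in its default router/NAT mode, clients behind it land on the router's own subnet, and traffic to 10.0.2.x has to be routed rather than switched. The checks below use only standard Linux tools; the target address is the one from the question:

# What address did M2 actually end up with, and on which subnet?
ip addr show

# Which interface/gateway would the kernel use to reach the target?
ip route get 10.0.2.46

# Does the path stop at the router?
traceroute -n 10.0.2.46

If M2 turns out to be on a different, NATed subnet, switching the Archer C50 to access point/bridge mode (or plugging the upstream cable into one of its LAN ports and disabling its DHCP server) usually puts M2 back on the 10.0.2.0/23 network directly.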

Client not able to connect to FTP server (only filezilla works)

Posted: 15 May 2021 09:47 PM PDT

I have created an FTP server on an Amazon EC2 instance following this DigitalOcean tutorial. I can log in normally, and that's the only part that works remotely from the shell (connecting from the server to localhost doesn't give any problem), but any command I give the FTP client (e.g. ls or put source destination) doesn't work. The tutorial sets the server to work in passive mode, but when looking for solutions online, whenever people had a problem with mode X, the solution was switching from mode X to mode Y and vice versa.


FTP (verbose mode) gives two different outputs:

  • While the connection is in ACTIVE mode
ftp> dir
500 Illegal PORT command.
ftp: bind: Address already in use
  • When the connection is in PASSIVE mode
ftp> dir
227 Entering Passive Mode (addr, of, my, server, port1, port2).
ftp: connect: Connection timed out

Although the command-line FTP client can't perform any operation other than logging in, FileZilla can. I guess the clue lies in FileZilla's status log:

Status: Connecting to addr:21...
Status: Connection established, waiting for welcome message...
Status: Insecure server, it does not support FTP over TLS.
Status: Server does not support non-ASCII characters.
Status: Logged in
Status: Retrieving directory listing...
Status: Server sent passive reply with unroutable address. Using server address instead.
Status: Calculating timezone offset of server...
Status: Timezone offset of server is 0 seconds.
Status: Directory listing of "/" successful

For your information, using FileZilla is not an option: I just installed it to have a good-looking client to perform basic operations and tests, but it turned out to be the only client that could do anything.
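The FileZilla line "Server sent passive reply with unroutable address. Using server address instead." is the usual tell on EC2: the daemon advertises its private IP in the PASV reply, which FileZilla silently corrects and the plain ftp client does not. A hedged sketch of the relevant settings, assuming the DigitalOcean tutorial's vsftpd setup (the public IP shown is a placeholder, and the passive port range must also be opened in the instance's security group):

# /etc/vsftpd.conf (fragment)
pasv_enable=YES
pasv_min_port=40000
pasv_max_port=50000
# Advertise the instance's public IP in PASV replies instead of the private one
pasv_address=203.0.113.10

Then restart the daemon, e.g. sudo systemctl restart vsftpd.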

Postfix "loops back to myself" only when trying to send to fallback relay

Posted: 15 May 2021 06:53 PM PDT

I have read the many answers to many similar questions, but have not found exactly this question or any answer that helped.

I have a Postfix 2.11 server (mail-server.example.com) that is strictly for outgoing email. It works fine if it is able to send the email on the first try. However, for performance reasons, if it cannot send the email on the first try and instead wants to defer it, I have configured smtp_fallback_relay so that it forwards the email to another server (deferred-mail.example.com) that handles only deferred emails.

smtp_fallback_relay = [deferred-mail.example.com]  

If, from my trusted IP (mynetworks), I send an email to either server destined for someone@gmail.com, either server will accept it and try to send it to Gmail. Because my server IP is new, Gmail rejects the email from either server with 421 (temporary deferral). As expected, the deferred-mail server simply puts the message in its deferred queue and tries again later.

The weird part is that while mail-server will send the message to Gmail to begin with, after receiving the 421, it attempts to send the mail to deferred-mail, but gives up and instead bounces the message, saying "mail for gmail.com loops back to myself". I see no logs on the deferred-mail server saying anything about what is going on.

So both servers are configured to accept mail and relay it to Gmail if the client is trusted, and deferred-mail trusts mail-server (I can send to a Gmail address when directly connecting to deferred-mail from mail-server), so how is Postfix coming up with this "loop" and why is it bouncing the email instead of relaying it?


Update and clarification about the logs below.

For privacy reasons, I am not posting the actual domain name or IP addresses we are using. Throughout this post and in the log excerpt below, I have replaced the actual root domain name we are using with example.com.

This setup was copied from a working email cluster. I am confident that it is theoretically sound, and suspect it is just a matter of some setting not being correctly replaced or translated. It is just that with this odd behavior and seemingly incorrect error message, I cannot figure out what to change, and that is why I am asking for help.

  • "Greeted me with my own hostname" is not a mistake or problem

When sending emails via SMTP, the sender begins by identifying itself by host name. Big mail processors like Gmail and Yahoo take several steps to assess the reputation of the sender. With IPv4, two of the first things they did were

  1. Record the IP address of the sender and build up a record of the quality and quantity of emails coming from that specific IP address
  2. Verify that the reverse DNS record for that IP address maps back to the host name the sender identified itself with

If the IP address has a bad reputation or the reverse DNS does not match the sender host name, the receiver would be much more likely to mark the email as Spam regardless of anything else.

For both these reasons, both mail-server.example.com and deferred-mail.example.com are behind the same NAT Gateway that

  1. Has a static IP address, so all email comes from the same IP address with the same reputation (REDACTED-PUBLIC-IPv4 in the log excerpt below)
  2. Has a reverse DNS record that identifies that IP address as mail-server.example.com

For that reason, when deferred-mail.example.com identifies itself in an SMTP session, it identifies itself as mail-server.example.com. This is correct, desired behavior and not an uncommon situation for the reasons stated above. (deferred-mail.example.com points to a private IP address, 10.111.14.181 in the logs below and does not have a reverse DNS record.)

  • IPv6 failure is not a mistake or problem

Our server stack does not currently have IPv6 enabled. Because we are using a new domain name, we have zero domain reputation, so we want to stick to IP reputation until we have a solid domain reputation established. (Because IPv6 addresses are disposable, they will never be an adequate basis for reputation.)

Therefore, the failures to reach Gmail's IPv6 servers aspmx.l.google.com and alt2.aspmx.l.google.com are expected and not particularly a problem, although if there is a setting that I can set to keep postfix from even trying to reach IPv6 addresses, please let me know so I can make that change.

Answer to disabling IPv6 attempts: Add this to your main.cf

inet_protocols = ipv4  

Per request, here are redacted logs of the error:

postfix/smtpd[26668]: 50F7C5B81: client=ip-10-111-7-75.us-west-2.compute.internal[10.111.7.75]
postfix/cleanup[26682]: 50F7C5B81: message-id=<>
opendkim[36]: 50F7C5B81: can't determine message sender; accepting
postfix/qmgr[143]: 50F7C5B81: from=<REDACTED-SENDER>, size=279, nrcpt=1 (queue active)
postfix/smtp[26686]: 50F7C5B81: host aspmx.l.google.com[74.125.197.26] said: 421-4.7.0 [REDACTED-PUBLIC-IPv4 15] Our system has detected that this message is 421-4.7.0 suspicious ...
postfix/smtp[26686]: connect to aspmx.l.google.com[2607:f8b0:400e:c07::1a]:25: Cannot assign requested address
postfix/smtp[26686]: connect to alt2.aspmx.l.google.com[2607:f8b0:4001:c56::1a]:25: Cannot assign requested address
postfix/smtp[26686]: 50F7C5B81: host alt2.aspmx.l.google.com[142.250.152.27] said: 421-4.7.0 [REDACTED-PUBLIC-IPv4 15] Our system has detected that this message is 421-4.7.0 suspicious ...
postfix/smtp[26686]: warning: host deferred-mail.example.com[10.111.14.181]:25 greeted me with my own hostname mail-server.example.com
postfix/smtp[26686]: warning: host deferred-mail.example.com[10.111.14.181]:25 replied to HELO/EHLO with my own hostname mail-server.example.com
postfix/smtp[26686]: 50F7C5B81: to=<someone@gmail.com>, relay=deferred-mail.example.com[10.111.14.181]:25, delay=26, delays=24/0.01/1.5/0, dsn=5.4.6, status=bounced (mail for gmail.com loops back to myself)

(log messages here about sending the bounce email to the sender have been removed)

postfix/bounce[26688]: 50F7C5B81: sender non-delivery notification: AEA8A5B83
postfix/qmgr[143]: 50F7C5B81: removed
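For what it's worth, Postfix's loop check fires on exactly those two warnings: when the next hop greets with the sending host's own myhostname, the SMTP client assumes the mail would loop and bounces it. A minimal sketch of one way out, assuming the goal is to keep presenting mail-server.example.com externally while letting the two hosts tell each other apart internally (myhostname and smtp_helo_name are standard Postfix parameters; the split itself is the assumption):

# main.cf on deferred-mail.example.com (sketch)
# Give this host its own internal identity so mail-server no longer
# sees "my own hostname" when relaying deferred mail to it ...
myhostname = deferred-mail.example.com

# ... but keep announcing the public name (matching the NAT gateway's
# reverse DNS) when speaking to the outside world.
smtp_helo_name = mail-server.example.com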

On Premise NPS server check computer account in Azure AD

Posted: 15 May 2021 03:11 PM PDT

I've got a Windows-based NPS RADIUS server for authenticating my wireless clients based on device certificates (supplied by my internal CA). In my NPS network policy I have set conditions to grant access only when the computer is a member of the group Domain Computers, the computer account is not disabled, etc. We are slowly shifting towards Azure AD, meaning computers are no longer members of our on-premise AD. I found this article https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-nps-extension but this is only for authenticating users through Azure AD, not computers. Is there something similar available so my on-premise NPS server can validate the Azure AD joined computers in Azure? Thanks!

Execution of ExecStop when script in ExecStart exits

Posted: 15 May 2021 03:53 PM PDT

I am a little confused by the behavior of systemd services. I have the following systemd service.

[Unit]
After=libvirtd.service

[Service]
Type=simple
Environment=VM_XML=xxxxxxx
ExecStartPre=/usr/bin/bash /usr/local/lib/common/createQcowImage.sh ${VM_XML}
ExecStart=/usr/bin/bash /usr/local/lib/common/createVM.sh ${VM_XML}
ExecStop=/usr/bin/bash /usr/local/lib/common/destroyVM.sh ${VM_XML}
Restart=always

[Install]
WantedBy=multi-user.target

When this unit starts, the createVM.sh script creates a VM and monitors its state. If the PID for the VM no longer exists, the script exits with return code 1. What I noticed is that when this happens, ExecStop is executed (I was monitoring /var/log/messages, and when I destroyed the VM manually with virsh destroy, I saw the echo I had put for debugging inside the ExecStop script printed in /var/log/messages). Is this the default behaviour of systemd, to execute ExecStop when the unit exits? (I also tried exiting with code 0 and the behaviour was the same.) This is not what I want, because after I destroy a VM, ExecStop tries to destroy the same VM, which causes a systemd unit failure. Is there any way to avoid that?
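Systemd does run ExecStop= as part of deactivating a unit even when the main process exits on its own, so the observation matches documented behaviour. A hedged way to keep the stop command from failing the unit on an already-destroyed VM is to tolerate its failure with the "-" prefix (a standard systemd Exec prefix), or to make destroyVM.sh itself idempotent:

# Sketch: the leading "-" tells systemd not to treat a non-zero exit
# of the stop script as a unit failure.
ExecStop=-/usr/bin/bash /usr/local/lib/common/destroyVM.sh ${VM_XML}

Alternatively, destroyVM.sh could first check with virsh whether the domain still exists and exit 0 quietly when it does not.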

establishing IKE_SA failed, peer not responding

Posted: 15 May 2021 05:09 PM PDT

I'm new to VPN things. I'm using strongSwan 5.8.2 with a swanctl config to establish my SA, using a PSK. I'm integrating with a company that will provide me some services, and they gave me a gateway server IP which is reachable when I ping it. On my side, swanctl can load the connection and the service runs fine under systemctl, but the logs show "establishing IKE_SA failed, peer not responding" and "error writing to socket: Network unreachable" after I initiate the connection. I'm using CentOS 8 for this.

this is my swanctl configuration:

connections {
    site-2-site {
        version = 1
        local_addrs = public-IP-site1
        remote_addrs = public-ip-site2
        local_port = 500
        remote_port = 500
        proposals = aes256-sha1-modp1536
        keyingtries = 1
        rekey_time = 86400s
        local {
            auth = psk
            id = public-IP-site1
        }
        remote {
            auth = psk
            id = public-ip-site2
        }
        children {
            site-2-site {
                esp_proposals = aes128-sha1
                local_ts = private-ip-site1
                remote_ts = private-ip-site2
                life_time = 3600s
                mode = tunnel
            }
        }
    }
}

secrets {
    secret = ThisIsPSKkey
    id-1a = public-ip-site1
    id-1b = public-ip-site2
}

logs when I initiate the connection:

12[CFG] vici initiate CHILD_SA 'stickearn-to-cimb'
13[IKE] initiating Main Mode IKE_SA stickearn-to-cimb[3] to public-ip-site2
13[ENC] generating ID_PROT request 0 [ SA V V V V V ]
13[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
13[IKE] sending retransmit 1 of request message ID 0, seq 1
13[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
11[IKE] sending retransmit 2 of request message ID 0, seq 1
11[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
08[IKE] sending retransmit 3 of request message ID 0, seq 1
08[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
13[IKE] sending retransmit 4 of request message ID 0, seq 1
13[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
10[IKE] sending retransmit 5 of request message ID 0, seq 1
10[NET] sending packet: from public-ip-site1[500] to public-ip-site2[500] (184 bytes)
04[NET] error writing to socket: Network is unreachable
07[IKE] giving up after 5 retransmits
07[IKE] establishing IKE_SA failed, peer not responding

Is it a problem with the firewall or something else? I'd appreciate your help.
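"error writing to socket: Network is unreachable" usually means the local kernel has no route for the packet at all (rather than the peer silently dropping it), so routing is worth checking before the firewall. If the host sits behind NAT and only has a private address, setting local_addrs to the public IP can also produce this error. A few hedged checks with standard CentOS 8 tools, using the peer address from the config:

# Can the kernel route to the peer at all, and from which source address?
ip route get <public-ip-site2>

# Is the configured local_addrs actually bound on this host?
ip addr show

# If routing looks fine, confirm UDP 500/4500 and ESP aren't blocked locally
sudo firewall-cmd --list-all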

php-fpm status page not found on docker

Posted: 15 May 2021 10:00 PM PDT

I have two Docker containers, one for php-fpm and one for nginx, which I run with a docker-compose.yml file. I'm trying to set up the php-fpm status page, but it is not found.

I access the http://localhost/status URL and it shows a File not found message.

I've tried many configurations but it doesn't work.

My directory tree is:

C:.
│   docker-compose.yml
│
├───app
│       index.php
│
├───fpmConf
│       www.conf
│
├───nginxConf
│       default.conf
│
└───phpConf
        php.ini

My docker-compose.yml file is:

version: '3'
services:
    php:
      image: php:7.2-fpm
      container_name: php_app
      restart: always
      environment:
        TZ: America/Mexico_City
      ports:
        - 9000:9000
      volumes:
        - ./app:/var/www/html
        - ./phpConf/php.ini:/usr/local/etc/php/conf.d/custom.ini
      tty: true
    nginx:
      image: nginx:latest
      container_name: nginx_proxy
      restart: always
      environment:
        TZ: America/Mexico_City
      ports:
        - 80:80
      links:
        - php
      volumes:
        - ./app:/var/www/html
        - ./nginxConf:/etc/nginx/conf.d
        - ./fpmConf/www.conf:/etc/php-fpm.d/www.conf
      depends_on:
        - php
      tty: true

My www.conf file is:

user = www
listen = php:9000

pm = ondemand
pm.max_children = 50
pm.max_requests = 500
pm.process_idle_timeout = 10s
pm.status_path = /status

ping.path = /ping

slowlog = /data/logs/php-fpm-slow.log
request_slowlog_timeout = 60
catch_workers_output = yes

env[TMPDIR] = /data/tmp/php

include = /data/conf/php-fpm-www-*.conf

My default.conf file is:

upstream php-upstream {
    server php:9000;
}

server {
    server_tokens off;
    charset utf-8;
    client_max_body_size 128M;

    proxy_buffer_size   128k;
    proxy_buffers   4 256k;
    proxy_busy_buffers_size   256k;

    listen 80;
    server_name localhost;

    # set caching header for static files
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|svg|ttf|webp)$ {
         access_log          off;
         log_not_found       off;
         expires             30d;
    }

    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }

            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
            fastcgi_connect_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_read_timeout 300;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_pass_header Set-Cookie;
            fastcgi_pass_header Cookie;
            fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
        }
    }

    location /status {
        access_log off;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php-upstream;
        fastcgi_index status.html;
    }
}

My php.ini file is:

date.timezone = "America/Mexico_City"

default_charset = "utf-8";
mbstring.internal_encoding=utf-8;
mbstring.http_output=UTF-8;
mbstring.encoding_translation=On;

session.use_cookies = 1
session.name = PHPSESSID
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain = alocalhost
session.serialize_handler = php

session.gc_maxlifetime = 86400
max_input_vars = 1000000
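One thing that stands out (offered as a hedged guess, not a confirmed diagnosis): the www.conf containing pm.status_path = /status is mounted into the nginx container (./fpmConf/www.conf:/etc/php-fpm.d/www.conf), not into the php container, and the official php:7.2-fpm image reads its pool configuration from /usr/local/etc/php-fpm.d/. A sketch of the volume line that would put the file where php-fpm actually looks:

# under services -> php -> volumes (sketch; the target path is the one
# the official php:fpm images use for pool configuration)
- ./fpmConf/www.conf:/usr/local/etc/php-fpm.d/www.conf

Once the pool actually loads pm.status_path, the existing location /status block in nginx should start returning the status page instead of "File not found".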

(internal) Packet capture in a google cloud VPC network?

Posted: 15 May 2021 03:04 PM PDT

I have a VPC network set up in google cloud with a few instances running. One of these instances serves as a VPN machine, allowing me to interact with the instances from the internet.

I want to capture traffic:

  1. From the internet to the VPC network. For this I simply use tcpdump on the vpn machine.
  2. Internally, i.e. packets between the instances. And that I don't know how to do. I cannot rely on my instances (using tcpdump directly in them isn't an option in my case). Instead, because in VPCs there is no "real" network layer 2 as it is virtualized, I was hoping that it would be possible to somehow tap into the cloud router and capture all packets from there, but it doesn't seem to be possible. Or is it?

Does anybody have an idea on what I could do here? Would be much appreciated. Thanks!
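Google Cloud does have a feature aimed at exactly this, Packet Mirroring, which clones traffic from selected instances or subnets to a collector instance group behind an internal load balancer, so tcpdump runs on the collector rather than on the monitored VMs. A hedged sketch of the gcloud side; the resource names are placeholders and the flag names should be verified against gcloud compute packet-mirrorings create --help for the installed SDK version:

# Sketch: mirror a subnet's traffic to a collector behind an internal LB
gcloud compute packet-mirrorings create my-mirror \
    --region=us-central1 \
    --network=my-vpc \
    --mirrored-subnets=my-subnet \
    --collector-ilb=my-collector-forwarding-rule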

EVENTCREATE fails with "ERROR: Access is denied" -- how to fix

Posted: 15 May 2021 07:01 PM PDT

I have also asked this question in a Microsoft Forum, but no answers there yet.

I am in the process of building out a Windows 2012 server to replace a legacy Windows 2008 server. I am testing a legacy batch script that logs informational or system events under different circumstances. However, the script is failing with the error "Access is denied".

I did some debugging and discovered the failure is with the EVENTCREATE line. Below is an example of the command and its failure:

C:\SCRIPTS>EVENTCREATE /t WARNING /d "testing" /id 411
ERROR: Access is denied.

If I execute this command as myself (administrator) it works fine. But if one of the users executes it, it fails.

I did google around as well as check Stack Overflow and its sibling sites, and observed that this sort of behavior was reported when the /so parameter is used. However, I am not using the /so parameter.

I've confirmed that the user has access to the EVENTCREATE command. If they invoke it with /? they see the help contents for it.

Is there some sort of group policy permission setting I need to adjust? Something else?

Thanks in advance for your help.

Exchange, retrieve the message found from message trace

Posted: 15 May 2021 09:03 PM PDT

I am running Exchange Online via Office 365. We had a customer email that our user could not locate. I ran a message trace and it found the message. We ran a search in the user's Outlook but the message does not show up there. How can I locate and/or forward that message to view it in its entirety and get the attachment?

Mount command hanging after attaching EBS volumes on AWS

Posted: 15 May 2021 09:03 PM PDT

I'm creating four volumes from four EBS snapshots on AWS. The four volumes make up one BTRFS multi-device drive. The drives have no partition tables, i.e. I've run mkfs.btrfs /dev/xvdf /dev/xvdg etc.

When I create the volumes I attach them with the AWS CLI tool then run btrfs device scan /dev/xvd{f,g,h,i}. The devices have a label, so to mount them I run (I've tried a few things) mount -t btrfs -o ro,nospace_cache,clear_cache -L LABEL /tmp/dir. This hangs for quite a long time, 15-20 minutes. The devices are each 2TB.

When I look in dmesg I see the messages below. While the mount is hanging, no additional output shows up in dmesg. I am at a loss as to what else to check to see what is causing this hang.

[ 3316.093665] blkfront: xvdf: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3316.124269]  xvdf: unknown partition table
[ 3316.430563] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3316.432660] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3317.067647] blkfront: xvdg: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3317.111985]  xvdg: unknown partition table
[ 3317.450861] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3317.453036] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3318.186270] blkfront: xvdh: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3318.232822]  xvdh: unknown partition table
[ 3319.025208] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3319.027380] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3320.067903] blkfront: xvdi: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3320.123265]  xvdi: unknown partition table
[ 3320.495547] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3320.497922] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3321.436803] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3321.437252] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3321.437641] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3321.438040] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3457.266257] blkfront: xvdf: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3457.315871]  xvdf: unknown partition table
[ 3457.862287] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3457.864513] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3458.413241] blkfront: xvdg: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3458.519586]  xvdg: unknown partition table
[ 3458.941200] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3458.943531] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3460.121281] blkfront: xvdh: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3460.198695]  xvdh: unknown partition table
[ 3460.805858] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3460.808173] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3461.146042] blkfront: xvdi: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
[ 3461.271872]  xvdi: unknown partition table
[ 3462.355409] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3462.357822] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3463.185250] btrfs: device label BtrFcdData devid 1 transid 592370 /dev/xvdf
[ 3463.185669] btrfs: device label BtrFcdData devid 2 transid 592370 /dev/xvdg
[ 3463.186098] btrfs: device label BtrFcdData devid 3 transid 592370 /dev/xvdh
[ 3463.186471] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3463.217317] btrfs: device label BtrFcdData devid 4 transid 592370 /dev/xvdi
[ 3463.219230] btrfs: disabling disk space caching
[ 3463.219233] btrfs: force clearing of disk cache
[ 4930.891445] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 2 transid 27031 /dev/xvdb
[ 4930.893685] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 2 transid 27031 /dev/xvdb
[ 4930.949380] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 1 transid 27031 /dev/xvdc
[ 4930.951209] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 1 transid 27031 /dev/xvdc
[ 4937.133509] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 2 transid 27031 /dev/xvdb
[ 4937.135297] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 2 transid 27031 /dev/xvdb
[ 4937.135757] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 1 transid 27031 /dev/xvdc
[ 4937.137575] btrfs: device fsid e512929a-72e4-4bf9-bf1d-c4744bb9cb06 devid 1 transid 27031 /dev/xvdc

Edit:

Running watch iostat -dk 2 -x 1 shows some different numbers: at first it showed high utilization on the first two devices, and then it shows this:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvda              0.01     1.49    0.87    2.99    22.34   274.17   153.71     1.62  419.08    5.06  540.15   3.91   1.51
xvdd              0.00     0.00    0.02    0.00     0.07     0.00     7.91     0.00    0.41    0.41    0.00   0.41   0.00
xvdb              0.00     0.10    1.53    1.59    23.88    31.80    35.69     0.16   52.79    5.00   98.68   2.80   0.87
xvdc              0.00     0.08    1.58    1.37    24.99    27.43    35.49     0.25   84.70    5.21  176.74   3.45   1.02
xvdf              0.00     2.11    0.48   13.36     7.44  1325.75   192.64     2.25  162.64    5.33  168.27   2.18   3.01
xvdt              0.00     0.00    0.01    0.00     0.08     0.00    31.25     0.00   77.77   77.77    0.00  72.35   0.04
xvdy              0.00     0.00    0.00    0.00     0.03     0.00    30.32     0.00  102.43  102.43    0.00 101.05   0.02
xvdu              0.00     0.00    0.00    0.00     0.00     0.00    20.96     0.00   75.64   75.64    0.00  75.64   0.00
xvdo              0.00     0.00    0.00    0.00     0.01     0.00    16.84     0.00   38.53   38.53    0.00  38.53   0.00

This is a snippet of the syslog:

Nov 23 17:35:01 ip-10-0-1-123 CRON[11806]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Nov 23 17:38:28 ip-10-0-1-123 kernel: [605946.625013] blkfront: xvdt: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
Nov 23 17:38:28 ip-10-0-1-123 kernel: [605946.732903]  xvdt: unknown partition table
Nov 23 17:38:28 ip-10-0-1-123 kernel: [605947.017249] btrfs: device label BtrFcdData devid 1 transid 616951 /dev/xvdt
Nov 23 17:38:29 ip-10-0-1-123 kernel: [605947.036501] btrfs: device label BtrFcdData devid 1 transid 616951 /dev/xvdt
Nov 23 17:38:30 ip-10-0-1-123 kernel: [605948.575945] blkfront: xvdy: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
Nov 23 17:38:30 ip-10-0-1-123 kernel: [605948.624099]  xvdy: unknown partition table
Nov 23 17:38:30 ip-10-0-1-123 kernel: [605948.944109] btrfs: device label BtrFcdData devid 2 transid 616951 /dev/xvdy
Nov 23 17:38:30 ip-10-0-1-123 kernel: [605948.946964] btrfs: device label BtrFcdData devid 2 transid 616951 /dev/xvdy
Nov 23 17:38:31 ip-10-0-1-123 kernel: [605949.783777] blkfront: xvdu: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
Nov 23 17:38:31 ip-10-0-1-123 kernel: [605949.839874]  xvdu: unknown partition table
Nov 23 17:38:32 ip-10-0-1-123 kernel: [605950.260981] btrfs: device label BtrFcdData devid 3 transid 616951 /dev/xvdu
Nov 23 17:38:32 ip-10-0-1-123 kernel: [605950.263252] btrfs: device label BtrFcdData devid 3 transid 616951 /dev/xvdu
Nov 23 17:38:33 ip-10-0-1-123 kernel: [605951.590060] blkfront: xvdo: barrier or flush: disabled; persistent grants: enabled; indirect descriptors: enabled;
Nov 23 17:38:33 ip-10-0-1-123 kernel: [605951.662633]  xvdo: unknown partition table
Nov 23 17:38:33 ip-10-0-1-123 kernel: [605951.950477] btrfs: device label BtrFcdData devid 4 transid 616951 /dev/xvdo
Nov 23 17:38:33 ip-10-0-1-123 kernel: [605951.953476] btrfs: device label BtrFcdData devid 4 transid 616951 /dev/xvdo
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.098657] btrfs: device label BtrFcdData devid 1 transid 616951 /dev/xvdt
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.099002] btrfs: device label BtrFcdData devid 2 transid 616951 /dev/xvdy
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.099292] btrfs: device label BtrFcdData devid 3 transid 616951 /dev/xvdu
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.099723] btrfs: device label BtrFcdData devid 4 transid 616951 /dev/xvdo
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.135590] btrfs: device label BtrFcdData devid 4 transid 616951 /dev/xvdo
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.137944] btrfs: disabling disk space caching
Nov 23 17:38:35 ip-10-0-1-123 kernel: [605953.137948] btrfs: force clearing of disk cache
Nov 23 17:38:36 ip-10-0-1-123 kernel: [605954.947400] btrfs: device label BtrFcdData devid 4 transid 616951 /dev/xvdo
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.868106] INFO: task mount:12477 blocked for more than 120 seconds.
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.871838]       Not tainted 3.13.0-92-generic #139-Ubuntu
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.874879] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878538] mount           D ffff88020fc13180     0 12477   7218 0x00000000
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878543]  ffff88016d32bc18 0000000000000086 ffff88009bb8e000 0000000000013180
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878547]  ffff88016d32bfd8 0000000000013180 ffff88009bb8e000 ffff88009bb8e000
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878550]  ffff8800776ca870 ffff8800776ca878 ffffffff00000000 ffff8800776ca880
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878553] Call Trace:
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878562]  [<ffffffff8172e2e9>] schedule+0x29/0x70
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878566]  [<ffffffff81730f05>] rwsem_down_write_failed+0x115/0x230
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878571]  [<ffffffff811dd0eb>] ? iput+0x3b/0x180
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878579]  [<ffffffffa00b0000>] ? 0xffffffffa00affff
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878584]  [<ffffffff81377da3>] call_rwsem_down_write_failed+0x13/0x20
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878602]  [<ffffffffa00b51fa>] ? btrfs_mount+0x2ea/0x800 [btrfs]
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878605]  [<ffffffff8173093d>] ? down_write+0x2d/0x30
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878609]  [<ffffffff811c3c7e>] grab_super+0x2e/0xa0
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878611]  [<ffffffff811c42f0>] sget+0x2d0/0x400
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878621]  [<ffffffffa00b29a0>] ? btrfs_parse_early_options+0x1b0/0x1b0 [btrfs]
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878630]  [<ffffffffa00b5328>] btrfs_mount+0x418/0x800 [btrfs]
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878634]  [<ffffffff811735aa>] ? pcpu_alloc+0x6ca/0xa50
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878637]  [<ffffffff8119ad33>] ? alloc_pages_current+0xa3/0x160
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878641]  [<ffffffff811c5149>] mount_fs+0x39/0x1b0
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878644]  [<ffffffff811e0917>] vfs_kern_mount+0x67/0x110
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878648]  [<ffffffff811e31ef>] do_mount+0x25f/0xcd0
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878652]  [<ffffffff81156dae>] ? __get_free_pages+0xe/0x50
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878655]  [<ffffffff811e2e16>] ? copy_mount_options+0x36/0x170
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878658]  [<ffffffff811e3f53>] SyS_mount+0x83/0xc0
Nov 23 17:41:22 ip-10-0-1-123 kernel: [606120.878662]  [<ffffffff8173a9dd>] system_call_fastpath+0x1a/0x1f

Nginx - serving static files

Posted: 15 May 2021 05:09 PM PDT

What am I doing wrong here?

This works:

location / {
    alias /var/www/static/;
    try_files $uri index.html =404;
}

But this doesn't:

location /hello {
    alias /var/www/static/;
    try_files $uri index.html =404;
}

Here is the error I'm getting:

[error] 14428#0: *1 open() "/usr/share/nginx/html/bundle.js" failed (2: No such file or directory)  

I know that Nginx is looking for bundle.js in the wrong directory. It should be located in the /var/www/static/ folder.

I can fix that by adding

root /var/www/static/  

at the beginning of my config, but then if I add another location the same problem re-emerges.

location /world {
    alias /var/www/another/;
    try_files $uri index.html =404;
}

Error:

[error] 14827#0: *1 open() "/var/www/static/bundle2.js" failed (2: No such file or directory)  

bundle2.js should be in the /var/www/another/ folder, but because I defined root as /var/www/static/, Nginx is looking for bundle2.js in the wrong folder.

Here is the whole config:

server {
    listen 80;

    root /var/www/static/;

    location /hello {
        alias /var/www/static/;
        try_files $uri index.html =404;
    }

    location /world {
        alias /var/www/another/;
        try_files $uri index.html =404;
    }
}
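For orientation, the difference between root and alias matters here, and so does the fact that the bare index.html fallback in try_files is resolved against root rather than the alias. A small sketch of the documented mapping (paths are illustrative):

# With root, the full URI is appended to the path:
#   GET /hello/bundle.js  ->  /var/www/static/hello/bundle.js
location /hello/ { root /var/www/static/; }

# With alias, the location prefix is replaced by the alias path:
#   GET /hello/bundle.js  ->  /var/www/static/bundle.js
location /hello/ { alias /var/www/static/; }

Also worth noting: if the page references the script as an absolute /bundle.js, the request never enters location /hello at all and falls through to the default root, which would explain the /usr/share/nginx/html path in the first error.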

Linux Hostapd registers as WEP, but I want WPA2-PSK

Posted: 15 May 2021 10:00 PM PDT

As usual, to respect the community, I have traversed a bajillion articles and re-read the manual, and still can't seem to get past this issue.

I'm trying to create an access point in Linux (Linaro on a DragonBoard 410c) with hostapd that registers as WPA2-PSK, but when I do a Wi-Fi scan, it keeps showing as WEP.

My config looks like the following

ssid=TestAccessPoint
interface=wlan0
hw_mode=g
ignore_broadcast_ssid=0
macaddr_acl=0
auth_algs=1
wpa=2
wpa_passphrase=omgiloveunicorn5
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

I fire off hostapd, and the daemon appears to launch successfully, but the network keeps reporting as WEP instead of WPA2-PSK. If I try to manually create the connection, nothing happens.

When I try to connect to the access point from any device, it reports the following error:

wlan0: STA blah blah blah: authenticated
wlan0: STA blah blah blah: No WPA/RSN IE in association request

Please help!
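For comparison, a hedged minimal hostapd.conf that typically comes up as plain WPA2-PSK (CCMP only) on nl80211 drivers. The driver= and channel= lines are notable because the config above omits them; "No WPA/RSN IE in association request" together with a WEP-looking scan result is what clients report when the RSN settings never make it into the beacons:

interface=wlan0
driver=nl80211
ssid=TestAccessPoint
hw_mode=g
channel=6
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=omgiloveunicorn5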

POSTFIX: Limit outbound connections, message count, cummulative size in time window

Posted: 15 May 2021 04:02 PM PDT

We send quite a large amount of email to our customers every month with Postfix. A big percentage of them have their email hosted at @example.com. But every month we struggle to deliver all the messages because of greylisting.

Example.com provided us with specific limits per 5-minute window to stay off the greylist:

Max. cumulative limits for every 5 minutes to domain example.com

  • 300 connections
  • 100 000 messages
  • total size of 180 MB

We are definitely not sending 100K messages, so the number of connections in this 5-minute window seems to be the main issue. Could anyone help with specific Postfix settings to meet the criteria above only for connections to example.com?
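Postfix can throttle a single destination by routing it through a dedicated transport with its own concurrency and pacing parameters. A hedged sketch follows: the transport name "slow" and the numbers are placeholders to tune against the 300-connections-per-5-minutes budget, while the parameters themselves are standard per-transport Postfix settings.

# master.cf: a clone of the smtp transport, used only for example.com
slow      unix  -       -       n       -       -       smtp

# main.cf: route example.com through it and throttle that transport
transport_maps = hash:/etc/postfix/transport
slow_destination_concurrency_limit = 2
slow_destination_rate_delay = 1s
slow_destination_recipient_limit = 50

# /etc/postfix/transport (then run: postmap /etc/postfix/transport)
example.com    slow: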

Asterisk Cancel Transfer (atxfer)

Posted: 15 May 2021 07:01 PM PDT

I'm using the Asterisk 13 AMI to start an atxfer. It works so far. But how can I cancel the transfer action?

Example: Bob calls Alice. Then after some time talking, Alice starts an automated transfer to Charles. Charles is not interested and but will not hang up, because of any reason. How can Bob now hangup Charles and get Alice back?

If I'm using the disconnect feature (and the H dialplan option), Bob hangs up the call with Charles but does not get Alice back. Bob then has an ongoing silent call and Alice still listens to the MOH. If one of both hangs up, the other call hangs up automatically.

Which feature or feature code is needed to get Alice back? Am I doing it right?
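One avenue worth checking (hedged, since it depends on how the attended transfer was started): features.conf has DTMF options specifically for steering an in-progress attended transfer from the transferrer's side, including aborting it and returning to the original party. A sketch of the relevant options; the defaults shown are from memory of the sample config and the section placement should be verified against the installed features.conf:

[general]
; DTMF codes usable while an attended transfer is in progress
atxferabort = *1      ; cancel the attended transfer and reconnect with the transferee
atxfercomplete = *2   ; complete the transfer early
atxferswap = *4       ; swap between the transferee and the transfer target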

Php-fpm process always running

Posted: 15 May 2021 08:01 PM PDT

I am using Nginx with PHP-FPM (PHP 5.5 + opcode cache) for a very high traffic web site, with the php-fpm status endpoint to monitor what's going on. After a big traffic peak, 200 "active" processes are still running despite there being no more traffic.

Here my php-fpm pool config:

pm.max_children = 1024
pm.start_servers = 32
pm.min_spare_servers = 32
pm.max_spare_servers = 64
pm.max_requests = 500

Here is the result of php-fpm status for an "always running" process:

{
    pid: 24223,
    state: "Running",
    start time: 1415881336,
    start since: 1307629,
    requests: 186,
    request duration: 1306169216849,
    request method: "GET",
    request uri: "/index.php?loca.....",
    content length: 0,
    user: "-",
    script: "/home/ebuildy/app/index.php",
    last request cpu: 0,
    last request memory: 0
},

This process should be "killed" or returned to an idle state, shouldn't it?
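Given the request duration counter on that worker, it looks like requests that never finish rather than a counting glitch. A hedged safety net at the pool level is request_terminate_timeout, which makes php-fpm kill workers whose single request exceeds the limit; the values below are examples, and the real fix is still finding what those requests block on.

; php-fpm pool configuration (sketch)
; Kill any worker whose current request has been running longer than this.
request_terminate_timeout = 120s
; Log a stack trace of slow requests well before they hit that limit.
request_slowlog_timeout = 30s
slowlog = /var/log/php-fpm/slow.log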

How does one cause a Megaraid controller to "re-scan" the devices?

Posted: 15 May 2021 08:01 PM PDT

We are using a Supermicro Megaraid card (2208) and we are trying to perform some SCSI operations directly on the drives using the passthrough ioctl. One issue that we have run into is that when we change something on the drive (say its visible capacity), the Megaraid controller does not "see" the change and tries to access the drive as if it still has the original capacity.

Physically pulling the drive out, waiting for a few seconds and returning it - solves the problem.

It seems that if we had a way to tell the Megaraid controller to "re-scan" the device we would be all set, the problem is that we don't know how to do it. Does anyone know how to do it?

Thanks in advance

How to tell rsync do not check permissions

Posted: 15 May 2021 09:17 PM PDT

I have two directories, each with the same PHP application in it. I want to execute rsync -rvz from the source directory to the destination, so rsync will copy changed files only. The problem is that files in the source directory have 755 permissions, by mistake. Permissions in the destination are fine.

How can I tell rsync to ignore permission checks and compare by size only? Thanks.
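For reference, a sketch with standard rsync options: by default rsync decides what changed from size and modification time, never from permissions, and --size-only narrows that check to size alone; --no-perms keeps the (wrong) source permissions from being applied if -a or -p is ever in the flag set. Paths are placeholders.

# Compare by size only; permissions are never part of the comparison
rsync -rvz --size-only /path/to/source/ /path/to/destination/

# If the flags ever include -a or -p, add --no-perms so source permissions are not applied
rsync -avz --size-only --no-perms /path/to/source/ /path/to/destination/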

Using several rewrite rules .htaccess

Posted: 15 May 2021 04:02 PM PDT

I have a website which is based on a search page that shows a list, with a detail page for each item.

If visitors use rootdomain.com/searchterm, they should be rewritten to rootdomain.com/index.php?keyword=searchterm.

I had that working. Now, for the details, rootdomain.com/detail/productname should be rewritten to rootdomain.com/detail.php?name=productname,

and here I fail...

I keep getting redirected to index.php with my $1 as 'detail/productname'

This is my .htaccess so far:

RewriteEngine on
RewriteCond $1 !^(index\.php|resources|robots\.txt)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^/detail/(.*)$  detail.php?name=$1 [L,QSA] -> L here doesn't work either.
RewriteRule ^(.*)$ index.php?keyword=$1 [L,QSA]

Update

Ok, so I am very close now

The problem was that a rewrite condition is only valid for the next rule, so apparently I have to repeat it. I also added detail.php to the ignore list in the first condition, because my rewrite to detail got rewritten again to index. Now, for some reason, my -f condition doesn't seem to work under detail/: all my CSS and images are broken. What is the best way to exclude them?

RewriteEngine on
RewriteCond $1 !^(index\.php|detail\.php|resources|robots\.txt)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^detail/(.*)$  detail.php?name=$1 [L,QSA]

RewriteCond $1 !^(index\.php|detail\.php|resources|robots\.txt)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?keyword=$1 [L,QSA]

Update 2

So I got it to work, but I guess this is not the shortest way to do it. If anyone is interested, feel free to make it shorter.

RewriteEngine on
RewriteCond $1 !^(index\.php|detail\.php|resources|robots\.txt)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^detail/(images|css|js)/(.*)$  $1/$2 [L,QSA]

RewriteCond $1 !^(index\.php|detail\.php|resources|robots\.txt)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^detail/(.*)$  detail.php?name=$1 [L,QSA]

RewriteCond $1 !^(index\.php|detail\.php|resources|robots\.txt|\.png)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?keyword=$1 [L,QSA]

How can I determine if a machine is online without using ping?

Posted: 15 May 2021 06:04 PM PDT

I used to use an application that could ping or maybe run a port scan on a machine even if the machine was configured to not allow it.

I am currently trying to ping a remote machine on our WAN, but I have configured the machine to not allow ping. Is there something similar to ping that I can use?

Again, this is a machine located in another city that is part of our WAN.
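A sketch of the usual stand-ins when ICMP echo is blocked: probing a TCP port the host is known to listen on. The commands are standard; the host name and ports are examples (3389 for a Windows box with RDP, 22 for SSH).

# TCP "ping" a specific port, skipping nmap's ICMP host discovery
nmap -Pn -p 3389 remote-host.example.com

# Or, with nothing but bash on hand, test a single TCP port with a timeout
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/remote-host.example.com/3389' && echo open || echo closed/filtered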
