Saturday, June 26, 2021

Recent Questions - Server Fault


MSA 2012i Controllers Failing

Posted: 26 Jun 2021 11:16 PM PDT

Good Day!

I am the lucky owner of an old MSA2012i, and I have a problem with firmware J212P01(A). I installed firmware J212P01(A) on the MSA2012i, but numerous errors appeared and physical and logical disks went missing... Everything was fine after a reboot, but this morning there was an error on the display:

"The controller you have connected to cannot communicate between its management controller and main storage controller"

Both controllers (A and B) do not answer...

The SMU shows an error message as well (screenshot omitted).

But when I remove controller A (or B) from the MSA, the remaining controller B (or A) works fine...

But both (A+B) controllers don't work together... The version on both controllers is J212P01(A). Which setting will help me?

Please!

It's important for me...

Catherine.

How to do a canary upgrade to existing istio customised setup?

Posted: 26 Jun 2021 10:33 PM PDT

How to do a canary upgrade to an existing customised istio setup?

Requirement:

  • We have an existing customised setup of istio 1.7.3 (installed using the istioctl method, with no revision set) on AKS 1.18.14.
  • Now we need to upgrade to istio 1.8 with no (or minimal) downtime.
  • The upgrade should be safe and must not break our prod environment in any way.

How we installed the current istio customised environment:

1) Created the manifest:

       istioctl manifest generate --set profile=default -f /manifests/overlay/overlay.yaml > $HOME/generated-manifest.yaml

2) Installed istio:

       istioctl install --set profile=default -f /manifests/overlay/overlay.yaml

3) Verified istio against the deployed manifest:

       istioctl verify-install -f $HOME/generated-manifest.yaml

Planned upgrade process (reference)

1) Pre-check for the upgrade:

       istioctl x precheck

2) Export the currently used istio configuration to a yaml file:

       kubectl -n istio-system get iop installed-state-install -o yaml > /tmp/iop.yaml

3) Download the istio 1.8 binary, extract it, and navigate to the directory that holds the 1.8 istioctl binary:

       cd istio1.8\istioctl1.8

4) From the new version's directory, create a new control plane for istio 1.8 with a proper revision set, using the previously exported installed-state "iop.yaml":

       ./istioctl1.8 install --set revision=1-8 --set profile=default -f /tmp/iop.yaml

   Expect that it will create a new control plane with the existing customised configuration, so that we have two control-plane deployments and services running side by side:

       $ kubectl get pods -n istio-system -l app=istiod
       NAME                                    READY   STATUS    RESTARTS   AGE
       istiod-786779888b-p9s5n                 1/1     Running   0          114m
       istiod-1-8-6956db645c-vwhsk             1/1     Running   0          1m

5) After this, we need to change the existing label of all our cluster namespaces where the istio proxy containers are injected: remove the old "istio-injection" label, and add the istio.io/rev label pointing to the canary revision 1-8.

       $ kubectl label namespace test-ns istio-injection- istio.io/rev=1-8

   Hopefully, at this point the environment is still stable with the old istio configuration, and we can decide which app pods to restart to pick up the new control-plane changes as our downtime allows; it is fine to run some apps with the old control plane and others with the new control-plane configuration at this point, e.g.:

       kubectl rollout restart deployment -n test-ns   (first)
       kubectl rollout restart deployment -n test-ns2  (later)
       kubectl rollout restart deployment -n test-ns3  (again, sometime later)

6) Once we have planned the downtime and restarted the deployments as decided, confirm that all the pods are now using the version 1.8 data-plane proxy injector only:

       kubectl get pods -n test-ns -l istio.io/rev=1-8

7) To verify that the new pods in the test-ns namespace are using the istiod-canary service corresponding to the canary revision:

       istioctl proxy-status | grep ${pod_name} | awk '{print $7}'

8) After upgrading both the control plane and the data plane, we can uninstall the old control plane:

       istioctl x uninstall -f /tmp/iop.yaml
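As a side note on step 7, the awk '{print $7}' there simply selects the seventh whitespace-separated column of the istioctl proxy-status output. A quick sketch with a simulated status line (the pod and istiod names below are hypothetical):

```shell
# Simulated istioctl proxy-status line (hypothetical values), just to show the extraction:
line='productpage-v1-123.default Kubernetes SYNCED SYNCED SYNCED SYNCED istiod-1-8-6956db645c-vwhsk 1.8.0'
echo "$line" | awk '{print $7}'   # prints the seventh column: istiod-1-8-6956db645c-vwhsk
```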

Need to clarify the points below before the upgrade.

  1. Are all the upgrade steps prepared above good to proceed with for a highly used prod environment?
  2. Is exporting the installed-state iop enough to capture all customisations for the canary upgrade, or is there any chance of breaking the upgrade or missing some settings?
  3. Will step 4 above create the 1.8 istio control plane with all the customisations we already have, without breaking or missing anything?
  4. After step 4, do we need any extra configuration related to the istiod service? The document we followed is not clear about that.
  5. For step 5 above, how can we identify all the namespaces where istio-injection is enabled, so that we modify only those namespaces and leave the others as they were?
  6. For step 8 above, how do we ensure we are uninstalling the old control plane only? Do we have to get the binary for the old control plane (1.7 in my case) and use that binary with the same exported /tmp/iop.yaml?
  7. I have no idea how to roll back from any issues that happen in between, before or after the old control plane is deleted.
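For what it's worth on point 5, a label selector can narrow things down to exactly the namespaces that carry the old injection label; a sketch (assuming the label value is `enabled`, as in the standard setup):

```shell
# List only the namespaces currently labeled for old-style sidecar injection:
kubectl get namespaces -l istio-injection=enabled -o name

# Relabel just those, leaving all other namespaces untouched:
for ns in $(kubectl get namespaces -l istio-injection=enabled -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label namespace "$ns" istio-injection- istio.io/rev=1-8
done
```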

OpenStack Swift /info API requires authentication

Posted: 26 Jun 2021 10:34 PM PDT

I have a Kolla-Ansible cloud that includes Swift. Swift does not accept the /info API without authentication:

$ curl 192.168.122.201:8080/info
{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}

It works when I provide a valid token.

$ curl -H "x-auth-token: $T" 192.168.122.201:8080/info
{"swift": {"version": "2.26.0", "strict_cors_mode": true, "policies": ...

expose_info = true is set by default, but to be certain, I set it explicitly in proxy-server.conf. This should make it unnecessary to authenticate.
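For reference, the relevant fragment of proxy-server.conf would look like this (a sketch; the section name follows the stock Swift proxy pipeline):

```
[app:proxy-server]
use = egg:swift#proxy
expose_info = true
```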

An obvious workaround would be to authenticate. Unfortunately, Cinder-Backup uses Swift and fails to start because it cannot perform the /info request. I have not found out if and how it is possible to force Cinder-Backup to authenticate.

What could cause this unexpected behaviour? How can I troubleshoot this problem?

Is there a program to diagnose and summarize why my machine is slow?

Posted: 26 Jun 2021 10:01 PM PDT

Is there a program that will tell me why my server is slow? What is being over-utilized – CPU / memory / disk / network – and what processes are driving that utilization? I want a program that can check and explain it to me (I don't want to figure it out myself). Maybe it could even suggest server tuning changes.

Redirecting Users to Specific Files With Cookies

Posted: 26 Jun 2021 08:21 PM PDT

Might be a dumb question, but I hope someone can explain it to me in a simple way for a beginner to understand.

Say I had a file that only logged-in users of my WordPress site are allowed to access. If I use a variable in nginx to redirect logged-in users to that file based on the presence of the logged-in cookie, such as:

if ($http_cookie ~* "(wordpress_logged_in_)") {
    return 302 /protected/file.pdf;   # hypothetical target file
}

Would that be a proper, safe, or acceptable use or no?

Clear AdpAliLog on Intel RAID controller

Posted: 26 Jun 2021 08:35 PM PDT

When I check the battery with the command-line tool:

CmdTool264 -adpalilog -aAll | grep -i "Battery has fail"

the result shows many old log entries. Can I delete them, and how? I only want to see the log from the real-time check. Many thanks!

I want to increase the KVM root file system from 60GB to 160 GB

Posted: 26 Jun 2021 07:18 PM PDT

The KVM image file is 3.1 GB in the file system on the hypervisor. The guest's file systems only show 60 GB of total space with df -kh. I want to increase the KVM volume space to 120 GB. I tried 'virsh blockresize', but it says it's unsupported.

file format: qcow2
virtual size: 160G (171798691840 bytes)
disk size: 3.0G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
    refcount bits: 16
    corrupt: false

How do I increase the KVM volume size so I can grow the root file system and allocate other logical volumes/file systems?
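For context, the usual sequence is to grow the image on the hypervisor and then grow the partition/LVM stack inside the guest. This is a sketch only; the image path, partition number, and volume-group names below are hypothetical and depend on the actual guest layout:

```shell
# On the hypervisor: grow the qcow2 image (per the qemu-img info above it is already 160G):
qemu-img resize /var/lib/libvirt/images/guest.qcow2 160G

# Inside the guest: grow the partition holding the PV, then the LVM stack, then the filesystem.
growpart /dev/vda 2
pvresize /dev/vda2
lvextend -l +100%FREE /dev/mapper/vg-root
resize2fs /dev/mapper/vg-root   # or xfs_growfs / for XFS
```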

Conditional port forwarding with ufw as a default policy

Posted: 26 Jun 2021 06:45 PM PDT

Is there a way such that, when a connection is denied (by the rule set) in ufw, the traffic is forwarded to another port on the local machine rather than dropped (by default)?

I can see two potential ways for port forwarding in ufw, I am wondering how to modify these so that it is conditional.

  1. Adding a -A PREROUTING rule to /etc/ufw/before.rules. But I need that rule to be applied only when the connection would not otherwise be allowed (as defined in the ufw rules) – in other words, as the default rule (to forward instead of block).

  2. sudo ufw route, but how do I apply that route only for denied connections? The example I see does not have a condition set (to use it as a catch-all default policy). Is it possible to add route as the default rule for connections?

SSH access with SSH reverse tunnel

Posted: 26 Jun 2021 04:31 PM PDT

I can find a lot of tutorials on the web for setting up a reverse SSH tunnel:

  ssh -p2000 -fNC -R 10011:localhost:22 user@proxy.de

But how can I then get an SSH connection to my local server? I would like to set up a connection from the proxy (which has a public IP) to localhost (which is in my home network) through the SSH reverse tunnel. I need to be able to type SSH commands on my localhost from anywhere.

Thanks for your help Stefan
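Assuming the tunnel above is up, the way back in is to ssh to the forwarded port. A sketch (the home-machine account name is hypothetical):

```shell
# On proxy.de itself: port 10011 now leads back to port 22 on the home machine.
ssh -p 10011 homeuser@localhost

# From anywhere else, jumping through the proxy in one step
# (with -J, "localhost" is resolved on the jump host, i.e. the proxy):
ssh -J user@proxy.de:2000 -p 10011 homeuser@localhost
```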

hp laptop fix network settings

Posted: 26 Jun 2021 04:12 PM PDT

In my Microsoft account on my HP laptop I noticed an active Xbox account and profile that I did not create and have never seen before. I don't own, and never have owned, an Xbox. I disabled many of the alarming permissions but can't delete the account or remove it from the computer altogether. Could someone have accessed my Microsoft account and created the Xbox account? Could someone remotely access my network with the Xbox account and have been spying on me or stealing my info/data?

inventory of computer names, MAC and network interface with a GPO startup script

Posted: 26 Jun 2021 06:49 PM PDT

I want to compile a list of computer names, MAC addresses and network interface descriptions on my network. The idea is to collect this information with a startup script on the client, called from a GPO in my Windows Server domain. The desired format is a plain text file on a Windows server file share, for example \\Server\technic\inventory.txt

Format of a data line in inventory.txt:

<date> <time> <computer name/ hostname> <MAC physical address> <network interface description>  

All clients are Windows 10 clients. New or updated records should be appended to the file Inventory.txt

If there is more than one MAC address per computer (for example LAN and WLAN) then I would like to have a separate line for each MAC.

How do I achieve this with a command-line script? The script runs with admin rights, and access to \\server\technic\inventory.txt is guaranteed. The requested information is in the output of the command "ipconfig /all", but how can I extract it to get the desired output format?

It may be better to access the information in a different way than via "ipconfig /all", because the output of ipconfig is language-dependent; language-independent solutions are probably better. Since I can also start PowerShell scripts from the command line, solutions in PowerShell are also welcome.
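Since PowerShell is welcome, one language-independent sketch queries WMI/CIM instead of parsing ipconfig. The share path is the one from the question; treating PhysicalAdapter=TRUE as "real NIC" is an assumption that may need tuning (e.g. to exclude Bluetooth adapters):

```powershell
# Append one line per physical adapter that has a MAC:
# <date> <time> <computer name> <MAC> <interface description>
$out = '\\Server\technic\inventory.txt'
Get-CimInstance Win32_NetworkAdapter -Filter 'PhysicalAdapter=TRUE AND MACAddress IS NOT NULL' |
  ForEach-Object {
    '{0:yyyy-MM-dd} {0:HH:mm} {1} {2} {3}' -f (Get-Date), $env:COMPUTERNAME, $_.MACAddress, $_.Description
  } | Add-Content -Path $out
```

Each physical NIC produces its own line, which also covers the LAN + WLAN case from the question.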

RouterOS PPPoE without its own LAN

Posted: 26 Jun 2021 06:28 PM PDT

I currently have a Billion router connected via PPPoE to an Australian ISP. The Billion router provides its own LAN, and then goes into an Eero mesh network, which also provides its own LAN.

I would like the Eero to connect to the WAN directly, so there is one LAN. However, Eeros do not yet support PPPoE.

For this I am looking into whether the MikroTik hEX, which runs RouterOS, can replace the Billion router: it would authenticate the PPPoE session but not set up a LAN of its own, so that the Eero receives the authenticated internet connection and can connect to the WAN itself.

So far I have found this, however, it seems that guide is based on the assumption the Hex will be setting up its own LAN.

Windows Server 2019 IIS SMTP

Posted: 26 Jun 2021 08:31 PM PDT

I've added the SMTP feature to my Windows Server as every tutorial on the Internet does and restarted IIS. But I can't see the SMTP virtual server under my sites:

Image of the issue here

What's wrong? Is there any problem with SMTP and Windows Server 2019?

More general question: How can I set up a mail server on my Windows Server 2019?

How to redirect from one subfolder to a subsubfolder with htaccess

Posted: 26 Jun 2021 08:07 PM PDT

I have this folder structure:

/fonts
  /myfont.eot
  /myfont.svg
  /myfont.ttf
  /myfont.woff
  /myfont.woff2
/content
  /page1
    /files
      /logo.png
      /style.css
    /index.html
  /page2
    /files
      /logo.png
      /style.css
    /index.html
  /page3
    /files
      /logo.png
      /style.css
    /a
      /index.html
    /b
      /index.html
  ...

The URLs one would call look like this:

  • example.com/content/page1
  • example.com/content/page2
  • example.com/content/page3/a
  • example.com/content/page3/b

Now all I want to achieve with an .htaccess file located in /page3 is that whoever visits example.com/content/page3 is properly redirected to example.com/content/page3/a (or example.com/content/page3/a/index.html, I don't mind whether the file name is in the URL or not).

I tried

DirectoryIndex /content/page3/a/index.html  

but in this case when I open example.com/content/page3 all relative references in the /a/index.html file are broken because of the missing directory level in the URL. Furthermore, while calling example.com/content/page3/a works, example.com/content/page3/b gives 403 Forbidden.

I tried

Redirect 301 /content/page3 /content/page3/a  

but this obviously results in an endless redirect spiral to example.com/content/page3/a/a/a/a/a/a/...... until the server stops trying.

So I figured I need some RewriteCond and RewriteRule configuration. Unfortunately, I don't understand the syntax, and all the examples I looked at do it at the top level with more complex stuff like redirecting files and sub-folders, sometimes off to another domain, etc.

I tried this

RewriteEngine On

RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
RewriteCond %{REQUEST_URI} ^/content/page3/$
RewriteRule ^/content/page3/?$ /content/page3/a [L]

because I figured this would replace "/content/page3" with "/content/page3/a", but to no avail, it doesn't do anything.

I now went with using

DirectoryIndex /content/page3/a/index.html index.html  

and replaced the relative references in the document with absolute ones. This works.

But firstly I would still prefer if the references could remain relative, so the document doesn't break in case the page3 folder is ever renamed, and secondly I'd rather have the /a subdirectory in the URL for clarity as to what is displayed.

How can I achieve this?
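One detail that explains the failed attempt: in a per-directory .htaccess, RewriteRule patterns are matched against the path relative to that directory and without a leading slash, so a pattern like ^/content/page3/?$ can never match there. Building on that, a loop-free external redirect in /content/page3/.htaccess might look like this (a sketch):

```
RewriteEngine On
# An empty path means the page3 directory itself was requested; send the browser to a/.
RewriteRule ^$ a/ [R=301,L]
```

Because the redirect only fires on the empty relative path, requests for /content/page3/a/... no longer match, which avoids the endless-redirect problem of the Redirect 301 approach.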

Add-NetNatStaticMapping not port forwarding to local VM

Posted: 26 Jun 2021 09:02 PM PDT

I'm running Windows 10 build 1809 and have Hyper-V installed. I have a Linux machine running behind a NAT, with internet connectivity working, on IP 10.0.5.5. I basically followed the instructions at the link below:

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/setup-nat-network

When creating the port mapping I ran:

Add-NetNatStaticMapping -ExternalIPAddress 0.0.0.0/24 -ExternalPort 8500 -Protocol TCP -InternalIPAddress 10.0.5.5 -InternalPort 8500 -NatName YetAnotherNAT  

If I try to hit http://10.0.5.5:8500 it works (the page loads). If I try to hit http://127.0.0.1:8500 it doesn't work (nothing loads). Even if I try to use any of my external IPs, it doesn't work.

It's basically like the whole port forwarding is not doing anything.

Any ideas?

Get-VmSwitch returns the following

PS C:\> Get-VMSwitch

Name             SwitchType NetAdapterInterfaceDescription
----             ---------- ------------------------------
nat              Internal
Wifi             External   Intel(R) Dual Band Wireless-AC 7265
DockerNAT        Internal
Default Switch   Internal   Teamed-Interface
MyNATSwitch      Internal
YetAnotherSwitch Internal

Get-NetNat returns the following

PS C:\> Get-NetNat

Name                             : YetAnotherNAT
ExternalIPInterfaceAddressPrefix :
InternalIPInterfaceAddressPrefix : 10.0.5.0/24
IcmpQueryTimeout                 : 30
TcpEstablishedConnectionTimeout  : 1800
TcpTransientConnectionTimeout    : 120
TcpFilteringBehavior             : AddressDependentFiltering
UdpFilteringBehavior             : AddressDependentFiltering
UdpIdleSessionTimeout            : 120
UdpInboundRefresh                : False
Store                            : Local
Active                           : True

HAProxy: Run external-check command every 30 seconds

Posted: 26 Jun 2021 11:00 PM PDT

I have an HAProxy configuration with two servers:

listen 10.10.10.10
        bind *:1234
        mode tcp
        option tcplog
        balance roundrobin

        timeout client  5h
        timeout server  5h

        option external-check
        option log-health-checks
        external-check path "/var/lib/haproxy/dev"
        external-check command /var/lib/haproxy/dev/testscript.sh
        external-check command /bin/true
        server nodo1-1 192.168.1.14:1234 check inter 30s fall 1 rise 1
        server nodo1-2 192.168.1.15:1234 check inter 30s fall 1 rise 1

But the command doesn't execute every 30 seconds.

How can a subfolder of a domain have a different IP?

Posted: 26 Jun 2021 08:07 PM PDT

We have multiple ecommerce platforms. Say, Magento and Shopware. Magento and Shopware are hosted in different servers. We want to migrate all Magento stores to Shopware.

The domain name of the Magento platform is aaa.com [IP address: 111.111.111.111] and the Shopware platform is bbb.com [IP address: 222.222.222.222]. We want aaa.com/en to be hosted on the Shopware platform [IP address 222.222.222.222], while all other substores will still use Magento. We know a subdomain of a domain can have a different IP [en.aaa.com].

We want the subfolder of a domain to be hosted in different server. How to achieve this?

Migrating one substore from one server to another
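The usual mechanism for serving one path of a domain from another machine is a reverse proxy on the server that owns the domain. A sketch in nginx terms (assuming aaa.com's front end runs nginx, which the question doesn't state, and that Shopware will answer for the forwarded Host header):

```nginx
# On 111.111.111.111, inside the server block for aaa.com:
location /en/ {
    proxy_pass http://222.222.222.222;        # the Shopware server
    proxy_set_header Host bbb.com;            # assumption: Shopware expects this Host
    proxy_set_header X-Forwarded-For $remote_addr;
}
```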

Symfony's PHP files don't work on prod server

Posted: 26 Jun 2021 05:01 PM PDT

I'm trying to deploy a Symfony (3.3.5) project to production for the first time and I am in a little trouble.

MySQL is installed and running on the server, and Symfony can connect to it. Apache2 and PHP are running too (and simple PHP files like an echo "hello world" work as expected).

I have two points:

First: when I try to access http://example.com/web/app.php the "page isn't working" (this is the Chrome error message) with error code 500. Same thing for any other URL like http://example.com/web/app.php/login.

Second: why do I still have to use routes with /web/app.php (if I don't use it, the server shows me a directory listing of the web server), whereas my .htaccess file looks like this:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ web/$1 [QSA,L]
</IfModule>

And my /etc/apache2/sites-available/example.com.conf file is:

<VirtualHost *:80>

    ServerAdmin mymail@gmail.com
    ServerName example.com
    ServerAlias dev.gepacte.com
    DocumentRoot /var/www/example.com/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    <Directory "/var/www/example.com/public_html">
        <IfModule sapi_apache2.c>
            php_admin_flag engine on
        </IfModule>
        <IfModule mod_php5.c>
            php_admin_flag engine on
        </IfModule>
        # If one of the following lines is uncommented, the apache2 server won't restart.
        #DirectoryIndex app.php
        #Options -Indexes
        #AllowOverride All
        #Allow from All
    </Directory>

</VirtualHost>

Of course, my Symfony project is located in /var/www/example.com/public_html.

EDIT:

My /var/log/apache2/access.log file:

80.215.95.207 - - [07/Nov/2017:18:22:44 +0100] "GET /web/app.php/ HTTP/1.1" 500 716 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"

And the /var/log/apache2/error.log file:

[Tue Nov 07 18:24:20.542808 2017] [mpm_prefork:notice] [pid 17564] AH00169: caught SIGTERM, shutting down
[Tue Nov 07 18:24:20.723226 2017] [mpm_prefork:notice] [pid 22562] AH00163: Apache/2.4.25 (Debian) configured -- resuming normal operations
[Tue Nov 07 18:24:20.723301 2017] [core:notice] [pid 22562] AH00094: Command line: '/usr/sbin/apache2'

ceph osd down and rgw Initialization timeout, failed to initialize after reboot

Posted: 26 Jun 2021 10:04 PM PDT

CentOS 7.2, Ceph with 3 OSDs and 1 MON running on the same node. radosgw and all the daemons run on the same node, and everything was working fine. After rebooting the server, the OSDs could not communicate (it looks like) and radosgw does not work properly; its log says:

2016-03-09 17:03:30.916678 7fc71bbce880  0 ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403), process radosgw, pid 24181
2016-03-09 17:08:30.919245 7fc712da8700 -1 Initialization timeout, failed to initialize

ceph health shows:

HEALTH_WARN 1760 pgs stale; 1760 pgs stuck stale; too many PGs per OSD (1760 > max 300); 2/2 in osds are down  

and ceph osd tree give:

ID WEIGHT  TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.01999 root default
-2 1.01999     host app112
 0 1.00000         osd.0              down  1.00000          1.00000
 1 0.01999         osd.1              down        0          1.00000
-3 1.00000     host node146
 2 1.00000         osd.2              down  1.00000          1.00000

and service ceph status results:

=== mon.app112 ===
mon.app112: running {"version":"0.94.6"}
=== osd.0 ===
osd.0: running {"version":"0.94.6"}
=== osd.1 ===
osd.1: running {"version":"0.94.6"}
=== osd.2 ===
osd.2: running {"version":"0.94.6"}
=== osd.0 ===
osd.0: running {"version":"0.94.6"}
=== osd.1 ===
osd.1: running {"version":"0.94.6"}
=== osd.2 ===
osd.2: running {"version":"0.94.6"}

and this is service radosgw status:

Redirecting to /bin/systemctl status radosgw.service
● ceph-radosgw.service - LSB: radosgw RESTful rados gateway
   Loaded: loaded (/etc/rc.d/init.d/ceph-radosgw)
   Active: active (exited) since Wed 2016-03-09 17:03:30 CST; 1 day 23h ago
     Docs: man:systemd-sysv-generator(8)
  Process: 24134 ExecStop=/etc/rc.d/init.d/ceph-radosgw stop (code=exited, status=0/SUCCESS)
  Process: 2890 ExecReload=/etc/rc.d/init.d/ceph-radosgw reload (code=exited, status=0/SUCCESS)
  Process: 24153 ExecStart=/etc/rc.d/init.d/ceph-radosgw start (code=exited, status=0/SUCCESS)

Seeing this, I have tried sudo /etc/init.d/ceph -a start osd.1 and stop a couple of times, but the result is the same as above.

sudo /etc/init.d/ceph -a stop osd.1
=== osd.1 ===
Stopping Ceph osd.1 on open-kvm-app92...kill 12688...kill 12688...done

sudo /etc/init.d/ceph -a start osd.1
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.02 at location {host=open-kvm-app92,root=default} to crush map
Starting Ceph osd.1 on open-kvm-app92...
Running as unit ceph-osd.1.1457684205.040980737.service.

Please help, thanks.

EDIT: it seems like the MON cannot talk to the OSDs, but both daemons are running OK. The OSD log shows:

2016-03-11 17:35:21.649712 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:22.649982 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:23.650262 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:24.650538 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:25.650807 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:25.779693 7f0024c96700  5 osd.0 234 heartbeat: osd_stat(6741 MB used, 9119 MB avail, 15861 MB total, peers []/[] op hist [])
2016-03-11 17:35:26.651059 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:27.651314 7f003c633700  5 osd.0 234 tick
2016-03-11 17:35:28.080165 7f0024c96700  5 osd.0 234 heartbeat: osd_stat(6741 MB used, 9119 MB avail, 15861 MB total, peers []/[] op hist [])

Windows Server Backup is doing incremental instead of full backup of Exchange data

Posted: 26 Jun 2021 05:01 PM PDT

I am backing up an Exchange Server database to a backup volume on Windows Server 2012 R2, using Windows Server Backup.

I mostly followed the tutorial shown at http://exchangeserverpro.com/backup-exchange-server-2013-databases-using-windows-server-backup/

I hope to back up data, and also remove old Exchange log files.

The backup is successful, but the log files are not being removed/truncated.

Exchange does not record a full backup on the database settings page. The "Details" panel for the last backup records it as a VSS full backup, successful, but in the "items" list both C and D are described as "Backup Type": "Incremental".

I cannot find any further settings to control whether the backup is "Full" or "Incremental", except the VSS setting, which is set to Full.

Any suggestions?

Nginx & PHP-FPM: Query parameters won't be passed to PHP

Posted: 26 Jun 2021 10:24 PM PDT

I am currently setting up a machine for local development using Vagrant. Everything runs as it should, except that query parameters aren't passed to PHP on subpages.

That means on www.example.com/?a=b, the query parameter is accessible, but on www.example.com/subpage/?a=b it's not.

The general reply I found using Google for this problem is to modify the try_files directive, but that isn't working for me. I've also checked request_order & variables_order in php.ini – everything is set up correctly there.

This is my config:

server {
    listen                80;
    server_name           example.com www.example.com;
    root                  /var/www/public;

    location / {
        index   index.html index.htm index.php;
        try_files $uri $uri/ /index.php?$query_string;
        include /etc/nginx/fastcgi_params;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include /etc/nginx/fastcgi_params;
    }

    sendfile off;
}

Since I don't know much about server setup & administration, I am hitting a brick wall here, still here are a few things I also checked:

  • $query_string is set in /etc/nginx/fastcgi_params as fastcgi_param QUERY_STRING $query_string; which seems correct to me.
  • The path to fastcgi_params is correct

Since it works when not on a subpage, I now suspect the location blocks are not matching, but I really don't understand how this could be the case – please help.

Write permissions to multiple users on same directory in ProFTPD

Posted: 26 Jun 2021 06:01 PM PDT

I am quite new to Webmin and ProFTPD and I am trying to give multiple users access to a public_html directory; both users are in the same group:

siteowner:x:504:504::/home/thepclincom
secuser.thepnlincom:x:510:504::/home/thepclincom/public_html

The site's ownership is set to:

siteowner siteowner

How would I give both of these users write access to public_html?

Thanks

Linux VNC doesn't accept my password

Posted: 26 Jun 2021 11:00 PM PDT

I set up a TightVNC server. I used this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-14-04

The VPS is hosted on DigitalOcean. The VNC server is running. After typing service vncserver start it says:

root@vpn:~# service vncserver start
 * Starting vncserver for user 'demo' on localhost:1...
New 'X' desktop is vpn:1

Starting applications specified in /home/demo/.vnc/xstartup
Log file is /home/demo/.vnc/vpn:1.log

root@vpn:~#

But after SSH tunneling with PuTTY and logging in with a VNC viewer, it simply said the following (so no credentials can even be entered):

This server does not have a valid password enabled. Until a password is set, incoming connections cannot be accepted.  

The user account demo has its own password, and a password for the vncserver was chosen, too. Any solutions?

EDIT: If I don't use the SSH tunneling method, I get:

Connection was refused by host computer  
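"This server does not have a valid password enabled" typically means the server cannot find a usable VNC password file for the account it runs under. A sketch of setting one for the demo user (the path is the TightVNC default; run it as that user so the file lands in the right home directory):

```shell
# Creates/updates /home/demo/.vnc/passwd interactively:
su - demo -c vncpasswd

# Restart so the server picks the new password file up:
service vncserver restart
```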

Redirect ssh trafic for one user through another port

Posted: 26 Jun 2021 10:04 PM PDT

Is it possible to have a configuration like this:

  • A server which listens for ssh connections on port 22 as usual
  • For one user (let's say git), redirect all the traffic through another port (2222 for instance)

As a result the command ssh git@host will produce the same result as ssh -p 2222 git@host.

Basically I am trying to have a sort of reverse proxy for ssh, but since, as far as I know, we can't use subdomains to distinguish incoming ssh connections, I was wondering if we can accomplish this kind of thing with a per-user approach.

Edit:

The reason is that I have set up a gitolite server in a Docker container, so in the end I have an ssh daemon which listens on port 2222 for git purposes. Additionally I have a "regular" ssh daemon which listens on port 22 (and I want to keep it).

Of course I can access the git server through port 2222 (if I open it to the outside), but I was wondering if I can use the "regular" ssh server from remote and then locally redirect to the "git" ssh daemon for the user git.

So the traffic will be something like this for the user git:

client <==> 22:server:2222:git_container

"You do not have permission to view this directory or page." on IIS 7.5 with WinServer 2008 R2

Posted: 26 Jun 2021 09:02 PM PDT

I have just uploaded my ASP.NET MVC 4 website through Visual Studio 2012 (FTP method). I login to my Windows Server and can see the files have been uploaded.

I've checked all bindings ... etc. But when I try to visit the website in my browser, all I get is: "You do not have permission to view this directory or page."

What am I doing wrong? I have checked this thread, and tried everything that is mentioned there, but nothing worked.

Why S3 website redirect location is not followed by CloudFront?

Posted: 26 Jun 2021 10:03 PM PDT

I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress.

I have set up some files with the metadata Website Redirect Location to handle the old locations and redirect them to the new website's pages.

For example: I had http://www.mysite.com/solution, which I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html. So I created an empty file named solution inside my bucket with the correct metadata:

Website Redirect Location = /product.html

The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly from the S3 domain.

I have also set up a CloudFront distribution based on the website bucket, but when I try to access it through my distribution, the redirect does not work, i.e.:

http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead.

So my question is: how do I keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO?

Thanks

Troubleshooting Redmine (Bitnami Stack) performance

Posted: 26 Jun 2021 06:01 PM PDT

I've got a Redmine instance (Bitnami Stack) that's unusually slow. I'm trying to get to the bottom of this and have some theories I'd like to discuss here, so if anybody has any ideas, please feel free to help :-)

System:

Bitnami Stack with Redmine 1.4.x upgraded to Bitnami Stack with Redmine 2.1.0 like this:

  • mysqldump'd the old database
  • installed new Bitnami Stack with Redmine 2.1.0
  • imported the dump cleanly, recreating all tables
  • rake db:migrate and all that

The stack is running on a Virtual Machine with OpenSUSE 12.1. The resources shouldn't be a problem, as there are always multiple gigabytes of free RAM and CPU spikes on Redmine requests go only up to 50% of 2 CPU cores. Also, there are only a few users accessing it.

What may well be important: user login is handled via LDAP (Active Directory).

Problem:

On each request, Redmine reacts unusually slow. Sometimes it takes 3 seconds, sometimes even up to 10 seconds to deliver the page.

My thoughts:

  • I don't know if "On-the-fly user creation" is checked in Redmine's LDAP settings; I can only check this later today. But could the lack of a check here be a problem? Authentication takes a moment when logging in, which is normal and expected. But when users are not created on the fly, does Redmine keep a session, or does it re-authenticate against LDAP on each request? That could be the problem.
  • Is Redmine 2.x maybe so much slower than 1.4.x that it's just plain normal?
  • Is Bitnami's Apache2+Passenger config faulty?
  • MySQL indexes wouldn't be a problem given the fact that MySQL is very calm on the CPU, would it?

One more thing that seems very odd to me, but maybe a false measurement result (need to re-check this tomorrow when I see the machine):

I tried to check if it's a network problem (network reacting slow, maybe DNS or something; server is in the local network). It seemed like requests on localhost (Browser directly on the OpenSUSE VM) were fast, but requests over the network weren't. Usually, I would think of a network problem, but the strange thing is: When actually measuring connect times, the network is fast as hell. Ping is good, static delivery times too. It seemed like only Redmine-side calculated pages are slowly sent by the application server while Apache's still fast - but only when the request is a remote LAN request. Very strange … but as I mentioned above, I have to re-check this one. It just seems illogical to me.
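To make the connect-time measurement reproducible, curl's timing variables can separate network time from application time: a small time_connect with a large time_starttransfer points at the application server (Passenger/Redmine), not the network. A minimal sketch, shown here against a throwaway local server; in practice point it at the Redmine URL from a remote LAN host:

```shell
#!/bin/sh
# Demo server: any HTTP endpoint works; 8123 is an arbitrary free port.
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# connect = TCP handshake (network), ttfb = time to first byte (app),
# total = full transfer. Compare these from localhost vs. a LAN client.
curl -o /dev/null -s \
    -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    http://127.0.0.1:8123/
kill $srv
```

If ttfb dominates only for remote requests while connect stays tiny, that would match the "application slow, network fast" observation.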

rsync --files-from specifies folders but does not copy files, recursive does not fix it

Posted: 26 Jun 2021 07:03 PM PDT

I've just recently revisited a script that I maintained a few years back, and I'm trying to get it to work on a new server.

It basically runs the following

rsync -vaP --copy-unsafe-links --files-from=dirlist.txt --exclude-from=excludelist.txt . /path/to/backup  

dirlist.txt is a plain text file that contains directories with files I want synced relative to .

foo/
foo/bar
foo/bar/gop
foo/tra
foo/bla
foo/bla/rgh
foo/bla/rgh/meh

and excludelist.txt is a plain text file that contains the path to specific files within the above directories that I want to exclude from the rsync command.

When I run the above I get

/path/to/backup/foo
/path/to/backup/foo/bar
/path/to/backup/foo/bar/gop
/path/to/backup/foo/tra
/path/to/backup/foo/bla
/path/to/backup/foo/bla/rgh
/path/to/backup/foo/bla/rgh/meh

But none of the files in those source directories were copied over.

I've tried using the -r option, but then I end up getting directories that I don't want, like foo/don or foo/tco, copied when rsync processes foo/.

I know this script has run in the past, so I'm very confused as to what has changed (other than maybe the rsync version, but I can't track down the version I last ran it with).

Update: I'm using various versions of rsync from 2.5.7 to 2.6.6. 2.5.7 has neither --files-from nor --filter.

automatically forward all network traffic to a proxy

Posted: 26 Jun 2021 07:03 PM PDT

I've used cntlm to automatically add NTLM headers to https requests when ssh'ing to a particular host.

What I need to do now is to send all outbound internet traffic (80/443) from any program running on machine A through a proxy server running on machine B transparently.

Machine A and B are on different networks (over the internet)

Is this at all possible? If so, I would appreciate a quick how-to.
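It is possible; one common pattern (a sketch with placeholder addresses, not a tested recipe) is to run a local redirector such as redsocks on machine A that relays into the proxy on machine B, with iptables steering outbound 80/443 into it:

```
# /etc/redsocks.conf on machine A -- B.B.B.B:3128 is the placeholder
# address/port of the proxy on machine B.
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = B.B.B.B;
    port = 3128;
    type = http-connect;
}

# iptables rules on machine A (as root): redirect outbound web traffic
# into the local redsocks listener, skipping traffic to the proxy itself
# so it does not loop.
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -d B.B.B.B -j RETURN
iptables -t nat -A REDSOCKS -p tcp --dport 80  -j REDIRECT --to-ports 12345
iptables -t nat -A REDSOCKS -p tcp --dport 443 -j REDIRECT --to-ports 12345
iptables -t nat -A OUTPUT -p tcp -m multiport --dports 80,443 -j REDSOCKS
```

Since cntlm is already in place, redsocks could equally point at it locally and let cntlm add the NTLM headers; HTTPS is relayed via CONNECT, so the proxy only sees the tunnel.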
