Monday, April 5, 2021

Recent Questions - Server Fault


How can I enable Hyper-V in MSI Modern 14 B4MW model?

Posted: 05 Apr 2021 10:15 PM PDT

I have tried everything to enable Hyper-V on my Windows 10 64-bit laptop, but I am unable to do it: I cannot find any option to enable it either in the BIOS or in Turn Windows Features On or Off. My machine model is Modern 14 B4MW-238IN. Any help would be appreciated. Thanks
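Two stock Windows checks are worth noting here, sketched with standard tooling rather than anything MSI-specific: systeminfo reports whether the machine currently meets the Hyper-V requirements, and DISM can enable the feature without the GUI.

REM Check the "Hyper-V Requirements" section at the end of the output;
REM virtualization must show as enabled in firmware
systeminfo

REM Enable the feature from an elevated prompt
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V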

How to add a custom OpenSSL engine with OpenSSL and use from apache server?

Posted: 05 Apr 2021 10:01 PM PDT

I have a custom-built OpenSSL engine. I'm trying to make changes to openssl.cnf to load this engine automatically. My ultimate goal is to use this engine with Apache mod_ssl.

Apache mod_ssl to use OpenSSL ENGINE on Ubuntu 14.04 addresses my issue, and I tried to follow the suggested solution. I have installed OpenSSL 1.1.1c from source with the following configuration:

./config --prefix=/opt/openssl -DOPENSSL_LOAD_CONF --openssldir=/opt/openssl/ssl   

According to Where to copy custom openssl engine library in openssl 1.1.0, I added the following changes to openssl.cnf to load my engine automatically,

openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[engine_section]
rsa-engine-new = rsa_section

[rsa_section]
engine_id = rsa-engine-new
#dynamic_path = /opt/openssl/lib/engines-1.1/rsa-engine-new.so  <-- Uncommenting this line causes a segmentation fault

After making the changes, running openssl engine shows the following,

root@ss:/opt/openssl/ssl# openssl engine
rsa-engine-new
(rdrand) Intel RDRAND engine
(dynamic) Dynamic engine loading support
(rsa-engine-new) engine for testing 1
140496290879232:error:260AB089:engine routines:ENGINE_ctrl_cmd_string:invalid cmd name:crypto/engine/eng_ctrl.c:255:
140496290879232:error:260BC066:engine routines:int_engine_configure:engine configuration error:crypto/engine/eng_cnf.c:141:section=rsa_section, name=oid_section, value=new_oids
140496290879232:error:0E07606D:configuration file routines:module_run:module initialization error:crypto/conf/conf_mod.c:177:module=engines, value=engine_section, retcode=-1

The output of openssl engine shows some errors, but my engine is loaded automatically and used as the default engine.

Then I installed httpd-2.4.10 from source with the following configuration:

CFLAGS='-DSSL_EXPERIMENTAL_ENGINE -DSSL_ENGINE -DOPENSSL_LOAD_CONF' ./configure --prefix=/etc/apache2 --enable-ssl --with-ssl=/opt/openssl/ssl --with-pcre=/usr/local/pcre --enable-so  

After the installation, I uncommented Include conf/extra/httpd-ssl.conf in httpd.conf. I added the following changes to the /etc/apache2/conf/extra/httpd-ssl.conf file:

SSLCryptoDevice rsa-engine-new  <-- line 31
#SSLCryptoDevice /opt/openssl/lib/engines-1.1/rsa-engine-new

When I try to restart the httpd server, I get the following error:

root@ss:/etc/apache2/bin# ./httpd -k restart
AH00526: Syntax error on line 31 of /etc/apache2/conf/extra/httpd-ssl.conf:
SSLCryptoDevice: Invalid argument; must be one of: 'builtin' (none), 'rdrand' (Intel RDRAND engine), 'dynamic' (Dynamic engine loading support)

So, my questions are:

  1. Why does openssl engine throw errors when the engine is otherwise loaded and working? And how can I fix this?
  2. How can I configure httpd-ssl.conf so mod_ssl uses the engine?
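For comparison, OpenSSL's config documentation describes a handful of standard keys for dynamically loaded engines. A minimal sketch follows, reusing this question's engine name and path; it is a generic layout rather than a fix verified against this engine (init = 0 defers initialization until the engine is first used, which sometimes avoids crashes while the config is parsed):

openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[engine_section]
rsa-engine-new = rsa_section

[rsa_section]
engine_id = rsa-engine-new
dynamic_path = /opt/openssl/lib/engines-1.1/rsa-engine-new.so
init = 0
default_algorithms = ALL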

multiple htaccess redirects to a new domain

Posted: 05 Apr 2021 07:23 PM PDT

I'm trying to redirect an entire domain over to another domain while also redirecting a couple of specific pages to specific locations. I also want to preserve www and https on the original domain to ensure there are no errors on the old domain and any links will still work/redirect correctly.

Here's what I've got so far -- the https, www, and sitewide redirects work fine, but the page-to-page redirects don't do anything:

RewriteEngine on

# Force HTTPS and WWW
RewriteCond %{HTTP_HOST} !^www\.(.*)$ [OR,NC]
RewriteCond %{https} off

RewriteRule ^(.*)$ https://www.OLDSITE.com/$1 [R=301]

RewriteRule ^/?OLDPOST1/?(.*)$ https://NEWSITE.com/NEWPOST1/ [L,QSA,R=302]
RewriteRule ^/?OLDPOST2/?(.*)$ https://NEWSITE.com/NEWPOST2/ [L,QSA,R=302]
RewriteRule ^/?OLDPOST3/?(.*)$ https://NEWSITE.com/NEWPOST3/ [L,QSA,R=302]

RewriteRule ^(.*) https://NEWSITE.com [L,QSA,R=302]

Any idea what I'm doing wrong? Thanks in advance!
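For background on why the ordering matters: mod_rewrite applies rules top to bottom, and a catch-all ^(.*)$ placed first will grab every request before the page-specific rules run. A sketch of the usual ordering, keeping this question's OLDSITE/NEWSITE/OLDPOST placeholders and hedged as a generic pattern rather than a verified fix:

RewriteEngine on

# Page-to-page redirects first, with [L] so processing stops here
RewriteRule ^OLDPOST1(/.*)?$ https://NEWSITE.com/NEWPOST1/ [L,QSA,R=302]
RewriteRule ^OLDPOST2(/.*)?$ https://NEWSITE.com/NEWPOST2/ [L,QSA,R=302]
RewriteRule ^OLDPOST3(/.*)?$ https://NEWSITE.com/NEWPOST3/ [L,QSA,R=302]

# Then force HTTPS and WWW on whatever is left (note %{HTTPS}, uppercase)
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.OLDSITE.com/$1 [L,R=301]

# Finally the site-wide redirect
RewriteRule ^(.*)$ https://NEWSITE.com/ [L,QSA,R=302]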

IPTables to protect Redis and PostgreSQL ports

Posted: 05 Apr 2021 06:44 PM PDT

I'm still new to the Linux world, but as soon as I learned about IPTables I decided to use this mechanism to secure sensitive services. The idea was simple: only allow connections from trusted IP addresses to specific port numbers.

I created a chain:

iptables -N mychain
iptables -A mychain --src 127.0.0.1 -j ACCEPT
iptables -A mychain --src IP_ADDRESS_1 -j ACCEPT
iptables -A mychain --src IP_ADDRESS_2 -j ACCEPT
iptables -A mychain -j REJECT --reject-with icmp-port-unreachable

and I used this chain to filter Redis (#6379) and PostgreSQL (#5432) ports:

iptables -I INPUT -m tcp -p tcp --dport 6379 -j mychain
iptables -I INPUT -m tcp -p tcp --dport 5432 -j mychain

However, it didn't work... I was able to connect to Redis using telnet from IP_ADDRESS_3.

What am I missing here? Any help appreciated! I'm using Ubuntu 18.04 (LTS).
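As a generic diagnostic sketch (standard iptables commands, nothing specific to this box): rule position matters, and other rule sources can accept traffic before a custom chain is consulted, so listing the rules with hit counters usually shows which rule actually matched.

# Show INPUT rules with positions and per-rule packet counters
iptables -L INPUT -n -v --line-numbers

# Check the nat table too: if Redis runs in Docker, published ports are
# DNATed in PREROUTING and traverse FORWARD, bypassing INPUT entirely
iptables -t nat -L -n -v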

Keystone as a standalone Identity Service

Posted: 05 Apr 2021 06:25 PM PDT

Is it possible to use Keystone as a standalone Identity Service with external services or is it tightly integrated with the OpenStack platform? Is it a supported scenario?

I guess I'm trying to figure out how "unnatural" it is, and how far it is from a normal flow.

I'm looking for an Authentication/Authorization solution for several apps deployed in K8s.

Thank you.

cPanel Exim using SMTP2Go do not allow auto-forwarding of mail

Posted: 05 Apr 2021 06:06 PM PDT

I am using SMTP2Go to relay my emails. They have informed me that they do not allow auto-forwarding of mail and that it violates their Terms of Service. SMTP2Go is working fine, but I need Exim to route all emails that are being forwarded through localhost and not via their service. I checked the cPanel forums, and the closest I came up with was these settings, but I am not sure how to apply them in my case.

send_via_ses:
  driver = manualroute
  domains = ! +local_domains
  condition = ${if !eqi{$local_part@$domain}{EMAILADDRESS}}
  condition = ${if !eqi{$local_part@$domain}{EMAILADDRESS2}}
  transport = ses_smtp
  self = send
  route_list = * localhost

Using nobarrier with ext4 and Google persistent disks

Posted: 05 Apr 2021 08:38 PM PDT

I was looking into speeding up a heavy DB write workload on a Google Cloud VM. I saw that the nobarrier option for ext4 can provide a performance boost, and I was wondering if anyone knows whether it is safe to use with Google persistent storage (Balanced PD). My understanding is that if your disks are battery-backed in one way or another, disabling barriers may safely improve performance, but I don't know how that applies to Google Balanced PD storage. Will I have more FS corruption/problems if my VM hangs, or if I perform a hard reset of the VM while write operations are happening, compared to not using the nobarrier option?
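For anyone wanting to benchmark the difference before committing, the option can be toggled per mount; a sketch with illustrative device and mount paths (note that recent kernels deprecated and then removed ext4's nobarrier option, so check the kernel version first):

# Remount an existing ext4 filesystem without write barriers, for testing
mount -o remount,nobarrier /mnt/db

# Or persist it in /etc/fstab:
# /dev/sdb1  /mnt/db  ext4  defaults,nobarrier  0 2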

SPF neutral on mail sent with SMTP from Gmail. SPF pass on mail sent from inbox for that address on G Suite

Posted: 05 Apr 2021 05:30 PM PDT

The domain name I'm trying to debug is dallaspetsalive.org. This is a G Suite domain.

When I log directly into my G Suite email account for the dallaspetsalive.org address and send an email, I get SPF pass result:

Received-SPF: pass (google.com: domain of someaccount@dallaspetsalive.org designates 209.85.220.41 as permitted sender) client-ip=209.85.220.41;

When I log into my personal gmail account and send email as the dallaspetsalive.org address using SMTP I've set up in the settings, I get SPF neutral result:

Received-SPF: neutral (google.com: 209.85.220.41 is neither permitted nor denied by domain of someaccount@dallaspetsalive.org) client-ip=209.85.220.41;

Does using SMTP in another inbox to send email ruin SPF? It appears to be checking the same IP, but gives a pass result in one case and a neutral result in the other, and I can't make any sense of it.

This is what I have for the TXT record for SPF. The non-Google addresses are services we use:

v=spf1 include:servers.mcsv.net include:_spf.neonemails.com include:_spf.google.com ~all

Thanks for any help you can provide. I am trying to optimize our email settings to avoid mail going to spam.
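One generic sanity check worth recording alongside the record above: reading the published TXT record back with dig confirms that live DNS matches what was configured.

dig +short TXT dallaspetsalive.org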

Trying to redirect an IP (box IP) to a port. Getting this error. Using nginx

Posted: 05 Apr 2021 04:59 PM PDT

Getting this error: The page isn't redirecting properly

An error occurred during a connection to hidden.

This problem can sometimes be caused by disabling or refusing to accept cookies.

server {
    server_name bot.hidden.com;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://bot.hidden.com:1235;
        proxy_redirect off;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/bot.hidden.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/bot.hidden.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = bot.hidden.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name bot.hidden.com;
    return 404; # managed by Certbot
}

Static Routing Server Rack

Posted: 05 Apr 2021 05:21 PM PDT

I am currently installing a server rack for a customer, and they've asked a question.

They want/need to optimise U space as to avoid paying overheads for U's that aren't needed.

The question they've asked, is whether or not they can use Static Routing with a L3 Switch.

This in their eyes would remove the need for 1 U, and would combine Switch/Router into 1 U.

It's a valid question; however, I am not sure how to answer it, as ARP came to mind.

So, can we replace the Router/Switch with a good L3 Switch?

And if so, can we statically route to each Server in the Rack without generating massive ARP cache?

Trying to programmatically create new Windows user's profile

Posted: 05 Apr 2021 10:43 PM PDT

Trying the low-impact solution mentioned in this post:

https://serverfault.com/questions/946882/how-can-i-programmatically-cause-a-new-windows-users-profile-to-be-created

Suggestion was to run a command as that user using psexec.exe for Windows to create the profile:

psexec.exe -u <domain/user name for AD user> -p <password> cmd.exe /c exit  

I'm running it locally on the VM I've created. I'm getting the message:

PsExec could not start cmd.exe:
The user name or password is incorrect

Can someone give me insight into what I'm doing wrong? My purpose is to create the user folder under C:\Users for the AD user I set up, without them or me having to log in to trigger the folder creation.
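For comparison, the usual credential forms here are DOMAIN\user or user@domain (the placeholder in the quoted command shows a forward slash). A sketch with hypothetical names:

REM DOMAIN\jdoe and the password are placeholders
psexec.exe -u DOMAIN\jdoe -p "ThePassword" cmd.exe /c exit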

Video Capture Devices available in RDP (mstsc) but not Virtual Machine Connection (vmconnect)?

Posted: 05 Apr 2021 10:02 PM PDT

I'm trying to pass my webcam through to a Hyper-V VM using Virtual Machine Connection (vmconnect). In the "Local Devices and resources" settings, I do not have the ability to do this:

[screenshot: vmconnect "Local devices and resources" dialog]

However, using Remote Desktop Connection (mstsc) with the same host & guest, I am able to pass through the webcam: [screenshot: mstsc "Local devices and resources" dialog]

My understanding is that vmconnect uses the same libraries as mstsc. I've checked the versions of both and they're the same (10.0.18362.1). Client and server are both Windows 10 1909.

Is there a setting or option I'm missing or is this just not available in vmconnect?

AWS EC2 ENA Support on Linux; Unable to connect after changing instance type

Posted: 05 Apr 2021 09:14 PM PDT

I'm in the process of upgrading several of our EC2 instances from type T2 to T3. This requires enabling ENA support. I've successfully upgraded 3 of 4 instances, but the last one is having issues.

I've enabled ENA, just like the other instances, changed the instance type to T3.2xlarge, and started the instance. When I attempt to SSH into it, SSH attempts to make the connection to the instance but gets no response. I get the same result trying to make it an M5 or M4 instance as well. However, starting it as a T2 or M3, I'm able to connect to it just fine.

The OS is Ubuntu 16.04.1 LTS and ENA support is enabled:

ubuntu@ip-172-xx-xx-xxxx:/$ modinfo ena
filename:       /lib/modules/4.4.0-150-generic/kernel/drivers/net/ethernet/amazon/ena/ena.ko
version:        2.0.3K
license:        GPL
description:    Elastic Network Adapter (ENA)
author:         Amazon.com, Inc. or its affiliates
srcversion:     E19C939F9F1A3B8E900815D
alias:          pci:v00001D0Fd0000EC21sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd0000EC20sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd00001EC2sv*sd*bc*sc*i*
alias:          pci:v00001D0Fd00000EC2sv*sd*bc*sc*i*
depends:
retpoline:      Y
intree:         Y
vermagic:       4.4.0-150-generic SMP mod_unload modversions
parm:           debug:Debug level (0=none,...,16=all) (int)

ubuntu@ip-172-xx-xx-xxxx:/$ aws ec2 describe-instances --instance-ids i-000scrubbed000 --query "Reservations[].Instances[].EnaSupport"
[
    true
]

Anyone have thoughts/ideas?

How can I add a certificate to a Windows service's certificate store from the command line?

Posted: 05 Apr 2021 06:04 PM PDT

I want to add a certificate to the certificate store belonging to a Windows service, from the command line. So far, the only thing I've found is:

certutil -service -store ADAM_Instance-Name\My  

When I run it (logged on as myself, in a Command Prompt as Administrator) it returns:

ADAM_Instance-Name\My
CertUtil: -store command FAILED: 0x80070057 (WIN32: 87)
CertUtil: The parameter is incorrect.

I've tried wrapping the Service\Store name in double quotes (same result) and single quotes (same result) and using a forward slash or space instead of the backslash, both giving:

ADAM_Instance-Name\My
CertUtil: -store command FAILED: 0x80070002 (WIN32: 2)
CertUtil: The system cannot find the file specified.

Can anyone help with the syntax for this command, or help with an alternative method?
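For the import itself (as opposed to listing), certutil has a documented -importPFX verb that takes an optional certificate store name and can be combined with store-location options such as -service; a hedged sketch only, since whether the ADAM instance store name resolves is exactly what this question is trying to establish:

certutil -f -service -p <pfx-password> -importPFX "ADAM_Instance-Name\My" certificate.pfx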

timestamp in shell code script

Posted: 05 Apr 2021 05:07 PM PDT

I'm new to shell scripting, and hopefully I'm in the right place to ask this question. I'm working with a shell script that collects daily files and sends them via FTP. The filenames contain both a date and a time, and the files are text (.txt). They sit in a directory alongside many other files that are created daily at different times. The files I'm trying to send are created at 08 AM, but with varying minutes and seconds. For example:

team1_mnpg_ef_part_2018-02-26_080005.txt
team1_abc_part_2018-02-26_080031.txt

The time differs in each filename, but all of them have 08 as the hour, and I want to send all files that have today's date and 08 as the hour, with whatever minutes and seconds (the minutes and seconds don't matter). Here is my code:

#!/bin/ksh
#
#
DATE=`date "+%Y-%m-%d"`

HOST='abcd.dmn.com'
USER='*****'
PASSWD='******'
LOCALPATH='/opt/abc/Output'
LOGPATH='/opt/abc/logs'

ftp -n -v $HOST <<END_SCRIPT>>$LOGPATH/ftp_abc_$DATE_log.txt
quote USER $USER
quote PASS $PASSWD
lcd $LOCALPATH
put team1_abc_part_$DATE_(I do not know what should I write here).txt
put team1_mnpg_ef_part_$DATE_.txt
put team1_fdop_part_$DATE_.txt

quit
END_SCRIPT
exit 0

The files are in the directory, and when I hard-code the timestamp, for example put team1_abc_part_$DATE_080031.txt in the above script, it works fine and I can see the sent file on the destination server.

What should I write for the time part of the filename so that the script sends the files created at 08 with whatever minutes and seconds?
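One generic approach worth sketching: the classic ftp client expands wildcards itself when mput is used with interactive prompting toggled off, so a glob can stand in for the unknown minutes and seconds. Same placeholders as the script above; note the braces in ${DATE} so the shell doesn't look for a variable named DATE_:

ftp -n -v $HOST <<END_SCRIPT >> $LOGPATH/ftp_abc_${DATE}_log.txt
quote USER $USER
quote PASS $PASSWD
lcd $LOCALPATH
prompt
mput team1_abc_part_${DATE}_08*.txt
mput team1_mnpg_ef_part_${DATE}_08*.txt
mput team1_fdop_part_${DATE}_08*.txt
quit
END_SCRIPT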

Nginx HTTP2 iOS 11 not working

Posted: 05 Apr 2021 10:00 PM PDT

I have problems with the HTTP2 protocol on my NGINX server. This is my configuration:

listen 443 ssl http2;
server_name adomain.com;
root /var/www/project;

limit_req   zone=one  burst=60 nodelay;

add_header Strict-Transport-Security "max-age=2592000; includeSubdomains;" always;
ssl_certificate     /etc/letsencrypt/live/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/privkey.pem;
ssl_protocols   TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 10m;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

resolver 8.8.8.8;
ssl_stapling on;
ssl_stapling_verify on;

keepalive_timeout   70;

I can't see the error on my iOS device (Safari 11). It's very strange: the webpage is a SPA (Angular) that makes requests to an API. The app loads over HTTP2, but when it has to make requests to the API, those requests fail. Disabling HTTP2 on the listen directive makes everything work as expected.

The ciphers for both servers frontend/backend are the same

In Chrome/Firefox/IE it works fine; I don't know what is wrong with Safari or my server config.

The error.log and adomain-error.log are empty when Safari fails

Nginx Version

nginx version: nginx/1.12.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

UPDATE

The console on my iPhone says "Protocol error", so I'm pretty sure it's an iOS 11 issue.

UPDATE 2

I have found this post

https://www.nginx.com/blog/http2-theory-and-practice-in-nginx-stable-13/  

It explains that if you support TLS versions below 1.2, you can end up with a PROTOCOL ERROR. Leaving just TLSv1.2 in my server config makes the app work again, but it's buggy: some requests still fail... that's beyond my comprehension. Once again, it works in Chrome/Firefox but not in mobile Safari.

UPDATE 3 [2019/02/28]

There was a bug in our NGINX config for the OPTIONS method of CORS requests, causing duplicated Content-Length and Content-Type headers in the response. After we fixed that, the app started working fine over HTTP/2. We also changed the status of the OPTIONS response from 200 to 204.
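Since the resolution came down to the CORS preflight response, here is a minimal sketch of answering OPTIONS directly with a 204 and a single set of headers inside the API's location block (the directives are standard nginx; the header values are illustrative, not taken from this server):

if ($request_method = OPTIONS) {
    add_header Access-Control-Allow-Origin "https://adomain.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
    return 204;
}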

snmpget error: “No Such Object available on this agent at this OID”

Posted: 05 Apr 2021 05:07 PM PDT

I want to create my own MIB. I've been struggling with this for a couple of weeks. I followed this tutorial and am using net-snmp 5.7.3. What I'm doing is:

My setup: I have two VMs, both Ubuntu 16: one is snmp-server with IP 192.168.5.20, and the other is snmp-agent with IP 192.168.5.21. I wrote a MIB, which compiles without any errors (this compilation is done only on the agent system, not on the server). I have already done this:

root@snmp-agent:# MIBS=+MAJOR-MIB
root@snmp-agent:# MIBS=+DEPENDENT-MIB
root@snmp-agent:# export MIBS
root@snmp-agent:# MIBS=ALL

My MIB files are in the path /usr/share/snmp/mibs, which is the default search path. I've already compiled the MIB and generated the .c and .h files successfully with the command mib2c -c mib2c.int_watch.conf objectName. And then I configured net-snmp like this:

root@snmp-agent:# ./configure --with-mib-modules="objectName"
root@snmp-agent:# make
root@snmp-agent:# make install

Everything worked fine. After this, when I run snmptranslate on the agent, I get this output:

root@snmp-agent:# snmptranslate -IR objectName.0
MAJOR-MIB::objectName.0

And with the command snmptranslate -On objectName.0, I get this output:

root@snmp-agent:# snmptranslate -On MAJOR-MIB::objectName.0
.1.3.6.1.4.1.4331.2.1.0

So, I'm getting the expected outputs on the agent system. Now my problem is I don't know how to get the same values from my server!

When I run snmpget, from the server, I get this error:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 MAJOR-MIB::objectName.0
MAJOR-MIB::objectName.0 = No Such Instance currently exists at this OID

Output when I specify the OID:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No Such Instance currently exists at this OID

Output when I do these:

root@snmp-server:# snmpget -v2c -c public 192.168.5.21 sysDescr.0
SNMPv2-MIB::sysDescr.0 = STRING: Linux snmp-agent 4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017 x86_64

root@snmp-server:# snmpwalk -v2c -c public 192.168.5.21 .1.3.6.1.4.1.4331.2.1
SNMPv2-SMI::enterprises.4331.2.1 = No more variables left in this MIB View (It is past the end of the MIB tree)

I have searched and am still searching, but no luck. What should I do? How can I use snmpget from my server against my own MIBs, the same way I can with sysDescr.0?

I want to do this: snmpget 192.168.5.21 myObjectName.0 and get the values.
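One generic thing to rule out, offered as an assumption rather than a confirmed diagnosis: Ubuntu's stock snmpd.conf restricts the community's readable view to a small systemonly subtree, which produces exactly this "No Such Object/Instance" symptom for custom enterprise OIDs even when the agent code is loaded. Widening the view on the agent would look roughly like:

# /etc/snmp/snmpd.conf on the agent
view   systemonly  included   .1.3.6.1.4.1.4331

# then restart the agent
service snmpd restart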

EDIT: I have already seen these answers, but they don't work: snmp extend not working and snmp no such object...

UPDATE 2:

When I do snmpwalk on server:

snmp-server:# snmpwalk -v 2c -c ncs -m DISMAN-PING-MIB 192.168.5.21 .1.3.6.1.2.1.80
DISMAN-PING-MIB::pingObjects.0 = INTEGER: 1
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = STRING: "/bin/echo"
DISMAN-PING-MIB::pingMinimumCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingCompliances.5.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 5
DISMAN-PING-MIB::pingCompliances.6.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.7.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingCompliances.20.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 4
DISMAN-PING-MIB::pingCompliances.21.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.1.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = ""
DISMAN-PING-MIB::pingIcmpEcho.3.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 1
DISMAN-PING-MIB::pingIcmpEcho.4.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = INTEGER: 0
DISMAN-PING-MIB::pingMIB.4.1.2.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48.1 = ""

When I do snmpget with pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48:

root@snmp-server:# snmpget 192.168.5.21 DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48
DISMAN-PING-MIB::pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 = Wrong Type (should be INTEGER): STRING: "/bin/echo"

So where am I going wrong? And what is pingFullCompliance.15.46.49.46.51.46.54.46.49.46.50.46.49.46.56.48 ? Why such a long OID?

Where am I going wrong? Can anyone point me in the right direction? Any suggestions are greatly appreciated.

Apache 2.4 restrict access

Posted: 05 Apr 2021 09:01 PM PDT

I got the following directories in my /var/www/htdocs:

test123/
test123/cache/
test456/
test456/cache/
test789/
test789/cache/
another_directory/cache/

I would like to achieve this:

  • access to / for everyone
  • access to /test123/test.htm + /test456/test.htm + /test789/test.htm for the ip-address 192.168.1.10
  • no access to any of the cache directories

So I have the following Apache 2.4 configuration, but it is not working as expected, because I am still able to access the cache directories test123/cache, test456/cache, and test789/cache.

<VirtualHost *:80>
        DocumentRoot /var/www/htdocs

        <Directory "/var/www/htdocs">
                Options -Indexes +FollowSymLinks
                AllowOverride None
        </Directory>

        <Directory ~ "/var/www/htdocs/test(123|456|789)">
                Require ip 192.168.1.10
        </Directory>

        <Directory "/var/www/htdocs/*/cache">
                Require all denied
        </Directory>
</VirtualHost>

What am I doing wrong? Thanks for your help! :)
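One detail of Apache's section merge order often bites here: non-regex <Directory> sections (including wildcard ones) are applied before regex sections, so the regex section granting Require ip is merged after, and can override the wildcard cache denial. A hedged sketch of expressing the cache rule as a regex evaluated later in the file, using the documented DirectoryMatch form:

<DirectoryMatch "^/var/www/htdocs/[^/]+/cache/">
        Require all denied
</DirectoryMatch>

Placed after the test(123|456|789) section, this regex section merges later and therefore wins for the cache paths.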

How can I create a PKCS12 File using OpenSSL (self signed certs)

Posted: 05 Apr 2021 06:49 PM PDT

I have a Bit9 server, and I'm fairly new to the environment, as well as to certs. The area to upload the cert says "Import Server Certificate From PKCS12 File".

I'm going to just use a self-signed cert (I'm hoping it's OK with that), and I'm running the command below to create one.

openssl req -x509 -newkey rsa:4096 -keyout bit9.pem -out cert.pem -days 365  

Is that what I should have done, and if so, how do I get this to a PKCS12 File?

I've been looking around, and found the below command:

Convert a PEM certificate file and a private key to PKCS#12

openssl pkcs12 -export -out <certificate.pfx> -inkey <privateKey.key> -in <certificate.crt> -certfile <CACert.crt>  

Since I only have the PEM files, I'm not sure how to do this.
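Working from the req command above, the two output files already are the private key (bit9.pem) and the certificate (cert.pem), so the pkcs12 export maps directly onto them; a sketch where bit9.pfx is an illustrative output name (-certfile is only needed for a CA chain, which a self-signed cert doesn't have):

openssl pkcs12 -export -out bit9.pfx -inkey bit9.pem -in cert.pem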

Nginx - Deny folder, except subfolders with regex

Posted: 05 Apr 2021 08:03 PM PDT

I want to deny access to anything in the app directory, except in subfolders with an Assets folder.

For example:

Allow these files:

/app/bundles/ApiBundle/Assets/css/Thing.js
/app/bundles/AssetsBundle/Assets/css/mautic.css

Deny these:

/app/bundles/Whatever/Config/config.php
/app/bundles/AppCache.php
/app/whatever.php

This works fine (https://serverfault.com/a/450378/310646)... as long as I don't use regex.

For example, this works:

location ^~ /app/bundles/ApiBundle/Assets/ {
    allow all;
}

location ^~ /app/ {
    deny all;
}

but this does not:

location ^~ /app/bundles/.+/Assets/ {
    allow all;
}

location ^~ /app/ {
    deny all;
}

Any ideas?
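For what it's worth, nginx's ^~ modifier declares a literal prefix match (and suppresses regex locations when it wins), so a pattern like .+ inside it is never treated as a regex. One commonly suggested shape that keeps the deny-by-default while regex-matching the Assets subfolders is a nested location; a sketch, not verified against this site:

location ^~ /app/ {
    deny all;

    # Nested regex location: requests matching an Assets path under any
    # bundle use this block's access rules instead of the outer deny
    location ~ ^/app/bundles/[^/]+/Assets/ {
        allow all;
    }
}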

How to increase heap size for ws_ant.sh when deploying on WebSphere 8.5 (64-bit Linux)

Posted: 05 Apr 2021 07:02 PM PDT

TL;DR -- How do I give ws_ant.sh and/or the <wsInstallApp> task more heap at runtime?

I am attempting to deploy a relatively large (~160-MB) EAR file to WebSphere 8.5 running on a 64-bit Linux platform.

Here is the task I have in my build.xml:

<wsInstallApp
    ear="/my/ear/file/location/New.EAR"
    properties="jvm.properties"
    options="-appname myNewEarApp -update -deployws"
    host="localhost"
    conntype="SOAP"
    user="the_username"
    password="not_telling_you"
    failonerror="true" />

Executing it with the ws_ant.sh packaged with WAS results in an OutOfMemoryError and heap dumps.

So, I need to increase the heap available to the task (or ws_ant itself?) at runtime, but I cannot figure out the proper place to do so. I tried modifying wsadmin.sh, and while that has an effect if I run my deployment as a Jython script with wsadmin.sh directly, it does not seem to have any impact whatsoever on the execution of <wsInstallApp> from within the Ant script.

According to the IBM documentation of wsInstallApp:

The properties attribute is optional and it contains a java properties file containing attributes to set in the JVM System properties

In my jvm.properties file, I tried:

[user@localhost]$ cat jvm.properties
-Xms4096m
-Xmx4096m

That had no effect. Executing ws_ant.sh with the -v verbose flag showed that, somewhere, the -Xmx value is set as -Xmx256m. I tried several other hare-brained combinations and formats, but nothing seems to work.

I also tried adding arguments onto the ws_ant.sh call:

[user@localhost]$ ws_ant.sh -Xms4096m -Xmx4096m -v -f build.xml was.deploy  

... but that also seems to do nothing.

What am I doing wrong? I concede that, if pressed, I could probably meet my requirements by re-writing the deployment using wsadmin.sh and a Jython script, but I'm trying to leverage some extensive Ant scripting from a different EAR application.

Alternatives? I also recognize that I could use the <wsadmin> Ant task to call some Jython scripts from within Ant-- I have not yet tried this-- but again, we already have some extensive scripting otherwise. What are the relative advantages and disadvantages of one way versus the other? (i.e., executing wsadmin.sh/Jython script via <[ssh]exec> or <wsadmin> versus <wsInstallApp> [and its "ws_____" siblings]).
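One more avenue, offered purely as an assumption to test rather than a verified fix: stock Apache Ant reads extra JVM flags from the ANT_OPTS environment variable, so if IBM's ws_ant.sh wrapper preserves the standard Ant launcher behavior, exporting it before the run would raise the heap:

# Assumption: ws_ant.sh honors the standard Ant ANT_OPTS variable
export ANT_OPTS="-Xms4096m -Xmx4096m"
./ws_ant.sh -v -f build.xml was.deploy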

How to troubleshoot internet unreachable periodically through firewall

Posted: 05 Apr 2021 07:02 PM PDT

Periodically the internet is unreachable on my network, sometimes for 30+ minutes. After testing a direct connection to our modem, I realized this was not a problem with our ISP but the network itself.

What I've tried:

  • I can ping the firewall.
  • Restarting the firewall fixes the connection.
  • Disconnecting the firewall from the switch fixes the connection.
  • When I connect my computer directly to the firewall, I still cannot get out, but when I disconnect the firewall from the switch, I don't have any problems.

What should my next steps be for troubleshooting this? I know how to use Wireshark, but I'm a bit of a noob and don't know what to look for. I did notice while the internet was working that one of my switches was putting out a lot of ARP requests compared to the others, asking for the same IPs over and over. I'm not sure if this is normal or not, though. Also, the switch keeps sending Spanning Tree packets that say "Topology Change Notification" in Wireshark.

Reading a few similar questions on SO, it sounds like I might have a loop somewhere in the network causing all the ARP requests. I'm not sure why it would be just the one switch sending them out so much, though, instead of all three on our network. I don't see any obvious looping in our setup, but I'm not sure how to rule this out, either.
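Given the ARP observations above, one low-effort way to quantify the problem is to capture only ARP traffic and watch the request rate (eth0 is a placeholder interface name):

# A loop or storm shows up as the same who-has queries
# repeating many times per second
tcpdump -i eth0 -n arp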

Update Network diagram:

Modem --- Firewall --- Switch --- (multiple connections to other computers and switches on the network)

Apache Rewrite rules for SSL in sub domain

Posted: 05 Apr 2021 06:04 PM PDT

I have a web site deployed that uses Kohana and URL rewriting to make the URLs more RESTful. This works fine.

I also have Moodle installed in a sub directory on the same server and a subdomain defined for this directory. So Moodle is installed in a directory called students and the subdomain is students.example.com. This too works fine.

I am now attempting to install an SSL certificate that I only need on the subdomain. I have a Comodo wildcard certificate, so it is supposed to work with subdomains. When I use https://example.com it works fine, so I can see that the SSL certificate is in force. However, when I try https://students.example.com it redirects to the main site. http://students.example.com works fine, though.

The .htaccess file that works for the kohana rewrite rules is:

# Use PHP5.4 Single php.ini as default
AddHandler application/x-httpd-php54s .php

# Turn on URL rewriting
RewriteEngine On

# Installation directory
RewriteBase /

# Protect hidden files from being viewed
<Files .*>
   Order Deny,Allow
   Deny From All
</Files>

# Protect application and system files from being viewed
RewriteRule ^(?:application|modules|system)\b index.php/$0 [L]

# Allow any files or directories that exist to be displayed directly
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d

# Rewrite all other URLs to index.php/URL
RewriteRule .* index.php/$0 [PT]
Options -Indexes

According to the docs I will need the following rules to be added for the subdomain:

#.htaccess WildCard SSL
RewriteCond %{HTTP_HOST} ^students.example.com$
RewriteCond %{REQUEST_URI} !^/students/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /students/$1
RewriteCond %{HTTP_HOST} ^students.example.com$
RewriteRule ^(/)?$ students/index.php [L]

I tried adding this as the first rule and as the second rule, but neither worked. I now understand that I will have to write a new set of rules to do what I want.

Any advice on how to accomplish this would be greatly appreciated. This site is hosted with Bluehost if that makes any difference.

Can Robocopy be configured to only log the “errors”?

Posted: 05 Apr 2021 07:41 PM PDT

Can Robocopy be configured to only log the "errors"?

On a large copy job, I'm really only interested in knowing what files were NOT copied.
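Robocopy's listing switches get close to this: they suppress the per-file success lines, so what remains in the log is mostly failures plus the job summary. A sketch with illustrative paths:

robocopy C:\src D:\dst /E /NP /NFL /NDL /LOG:C:\logs\copy.log

/NFL and /NDL drop the file and directory success listings, and /NP drops the progress percentages; failed items are still reported.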

ifconfig eth0 RX dropped packets

Posted: 05 Apr 2021 06:49 PM PDT

The problem

The ifconfig command shows more and more dropped packets in the RX section, so there seems to be a problem with some packets arriving from the Internet to my server.

The questions

  1. What kind of packets does this drop counter take into account? Does it count all arriving packets, before reaching the iptables firewall, or only packets already accepted by iptables?

  2. How can I solve the situation so that the ifconfig dropped-packets counter stops increasing?

Useful troubleshooting info

Since I don't know what my problem really is, feel free to ask me to complete this section if you think some other info would be needed.

ifconfig

eth0      Link encap:Ethernet  HWaddr 00:cc:cc:cc:cc:cc
          inet adr:90.0.0.2  Bcast:90.0.0.255  Masque:255.255.255.0
          adr inet6: fe80::21c:c0ff:feb9:829c/64 Scope:Lien
          adr inet6: 2001:a100:1:bbbb::1/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:113264620 errors:0 dropped:2523 overruns:0 frame:0
          TX packets:168526529 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 lg file transmission:1000
          RX bytes:59171827564 (55.1 GiB)  TX bytes:223993117711 (208.6 GiB)

Note the "dropped:2523" in the RX section. This is the most important. This number is continuously increasing.

ip -4 route show

default via 90.0.0.254 dev eth0
90.0.0.0/24 dev eth0  proto kernel  scope link  src 90.0.0.2

ip -6 route show

2001:a100:1:bbbb::1/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
default via 2001:a100:1:bbff:ff:ff:ff:ff dev eth0  metric 1024

[munin graph: if_err_eth0 plugin, daily view]
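For anyone triaging the same symptom, per-cause counters usually narrow things down better than ifconfig's aggregate; generic iproute2/ethtool commands, using the eth0 interface shown above:

# Kernel-side RX/TX statistics including drop totals
ip -s link show eth0

# NIC/driver counters often split drops by cause (ring full, missed, etc.)
ethtool -S eth0 | grep -i drop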

How to redirect the sitemap.xml used depending on the domain?

Posted: 05 Apr 2021 10:00 PM PDT

How can I redirect sitemap.xml file access to different subfolders, if it can be reached from three different domains?

  • domain1/sitemap.xml -> domain1/es/sitemap.xml
  • domain2/sitemap.xml -> domain1/de/sitemap.xml
  • domain3/sitemap.xml -> domain1/uk/sitemap.xml

domain1, domain2, and domain3 point to the same folder.

Is it possible? How can I do this? Should I do it with PHP?

PS: The server is Linux running Apache; the web platform is WordPress.
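Since all three domains share one docroot, a host-conditional rewrite is a natural fit; a sketch in .htaccess form (domain1/2/3 stand for the question's real hostnames, and the targets follow the list above):

RewriteEngine on

RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.com$ [NC]
RewriteRule ^sitemap\.xml$ /es/sitemap.xml [L]

RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
RewriteRule ^sitemap\.xml$ /de/sitemap.xml [L]

RewriteCond %{HTTP_HOST} ^(www\.)?domain3\.com$ [NC]
RewriteRule ^sitemap\.xml$ /uk/sitemap.xml [L]

With WordPress in play, these rules would need to sit above the standard WordPress block so its catch-all index.php rule doesn't consume the request first.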

Outlook 2007 login prompt repeat prompt and exchange 2010

Posted: 05 Apr 2021 09:01 PM PDT

It seems that when I first set up a new user in Outlook 2007, a login prompt comes up and asks for credentials. After the account is set up, the prompt repeats throughout the day, which is a little annoying. This started recently, and not all machines are doing it - just a few on the network (the new HP 8200 Elite small form factor machines, to be exact). Outlook 2007 works perfectly on the other models we have - HP 6000, Optiplex 330, Opti 320. A few of the HP 8200s work, and the new HP 8200s give us the prompt. Any help would be much appreciated.

Distributed and/or Parallel SSIS processing

Posted: 05 Apr 2021 08:03 PM PDT

Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week!

We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers, such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETL runs so they don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing.

One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently.

Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

So in summary, are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.) Ultimately, VMWare's Distributed Resource Scheduler addresses this: you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like someone out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels.

I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great.

Thanks guys,

  • Jeff

What permissions should my website files/folders have on a Linux webserver?

Posted: 05 Apr 2021 09:37 PM PDT

This is a Canonical Question about File Permissions on a Linux web server.

I have a Linux web server running Apache2 that hosts several websites. Each website has its own folder in /var/www/.

/var/www/contoso.com/
/var/www/contoso.net/
/var/www/fabrikam.com/

The base directory /var/www/ is owned by root:root. Apache is running as www-data:www-data. The Fabrikam website is maintained by two developers, Alice and Bob. Both Contoso websites are maintained by one developer, Eve. All websites allow users to upload images. If a website is compromised, the impact should be as limited as possible.

I want to know the best way to set up permissions so that Apache can serve the content, the website is secure from attacks, and the developers can still make changes. One of the websites is structured like this:

/var/www/fabrikam.com
    /cache
    /modules
    /styles
    /uploads
    /index.php

How should the permissions be set on these directories and files? I read somewhere that you should never use 777 permissions on a website, but I don't understand what problems that could cause. During busy periods, the website automatically caches some pages and stores the results in the cache folder. All of the content submitted by website visitors is saved to the uploads folder.
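To make the trade-offs concrete, here is one widely used baseline sketched with this question's names (treating fabrikam as a per-site developer group is an assumption): developers own and can write everything, Apache can only read, and only the directories the application must write to are opened to www-data.

# Developers own the tree; files are world-readable so Apache can serve them
chown -R alice:fabrikam /var/www/fabrikam.com
find /var/www/fabrikam.com -type d -exec chmod 755 {} \;
find /var/www/fabrikam.com -type f -exec chmod 644 {} \;

# Only the writable directories are handed to Apache's group
chgrp www-data /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads
chmod 775 /var/www/fabrikam.com/cache /var/www/fabrikam.com/uploads

The problem with blanket 777 also drops out of this layout: with a world-writable docroot, a compromised upload handler can overwrite index.php itself, whereas here any write compromise is confined to cache and uploads.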

How can I save the counters’ setup in windows performance monitor

Posted: 05 Apr 2021 04:59 PM PDT

I need a comprehensive and complex set of performance counters in Windows Performance Monitor. At this point, every time I use Performance Monitor I have to add the counters one by one. Is there any way to save the counter set and load it on later use? Thank you.
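Two stock options cover this, sketched generically: in the perfmon GUI a configured view can be turned into a Data Collector Set from the right-click menu and reloaded later, and from the command line logman recreates a counter log set from a plain text file listing one counter path per line (the names below are illustrative):

:: counters.txt holds one counter path per line,
:: e.g. \Processor(_Total)\% Processor Time
logman create counter MyCounterSet -cf counters.txt -si 15
logman start MyCounterSet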
