Tuesday, January 18, 2022

Recent Questions - Server Fault

How to load a new Apache site without affecting existing sites?

Posted: 18 Jan 2022 08:45 AM PST

If I add a new Apache site at /etc/apache2/sites-enabled/newsite.conf, how do I load that new configuration without bringing down any other sites currently being hosted by Apache?

As far as I know, sudo apachectl graceful and sudo service apache2 restart and sudo service apache2 reload all cause a brief outage to all sites, with the outage being shorter for reload.
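For reference, a minimal sketch of the reload sequence this implies, assuming the Debian/Ubuntu layout the path above suggests:

# Validate the new vhost first; nothing is reloaded by this step.
sudo apachectl configtest

# Graceful restart: the parent re-reads its configuration, and child
# processes are advised to exit only after finishing their current request,
# so in-flight connections to the existing sites are not dropped.
sudo apachectl graceful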

Strange/Unknown Sonicwall Audit Log - Download failed on various .exe

Posted: 18 Jan 2022 08:41 AM PST

Simple question: these log entries have started appearing daily on our Sonicwall, and I have never seen anything like them before. My initial thought is that it's yet another bot out there searching for vulnerabilities. Any insight you might have is appreciated, specifically what it is that they might be trying to exploit.

Sonicwall Firmware: SonicOS Enhanced 6.5.4.9-92n

Logs in CSV (full header: AuditID, Transaction_Id, Time, Audit_Path, group, Index, Description, Old, New, Status, UUID, User, Session, Mode, Source, Dest, Interface; empty columns omitted below):

AuditID  Txn  Time                  Description    File                          Status  Source                Dest                          Iface
0        1    18:24:42 Jan 05 2022  Download file  /scripts/cgi-bin/cbag/ag.exe  Failed  146.70.38.12 (36825)  <our external address> (700)  X1
1        2    18:24:48 Jan 05 2022  Download file  grn.exe                       Failed  146.70.38.12 (44723)  <our external address> (700)  X1
2        3    18:24:50 Jan 05 2022  Download file  ag.exe                        Failed  146.70.38.12 (50973)  <our external address> (700)  X1
3        4    18:24:54 Jan 05 2022  Download file  /cgi-bin/cbag/ag.exe          Failed  146.70.38.12 (55745)  <our external address> (700)  X1
4        5    18:24:56 Jan 05 2022  Download file  db.exe                        Failed  146.70.38.12 (39315)  <our external address> (700)  X1
5        6    18:24:58 Jan 05 2022  Download file  mw.exe                        Failed  146.70.38.12 (37489)  <our external address> (700)  X1
6        7    18:25:20 Jan 05 2022  Download file  /scripts/cgi-bin/cbag/ag.exe  Failed  146.70.38.12 (60097)  <our external address> (85)   X1
7        8    18:25:22 Jan 05 2022  Download file  grn.exe                       Failed  146.70.38.12 (44205)  <our external address> (85)   X1
8        9    18:25:23 Jan 05 2022  Download file  ag.exe                        Failed  146.70.38.12 (59829)  <our external address> (85)   X1
9        10   18:25:25 Jan 05 2022  Download file  /cgi-bin/cbag/ag.exe          Failed  146.70.38.12 (51061)  <our external address> (85)   X1
10       11   18:25:25 Jan 05 2022  Download file  db.exe                        Failed  146.70.38.12 (35567)  <our external address> (85)   X1
11       12   18:25:26 Jan 05 2022  Download file  mw.exe                        Failed  146.70.38.12 (39315)  <our external address> (85)   X1

Is there any reason that I shouldn't use --forcebadname to enable me to set my Windows account username as my WSL username?

Posted: 18 Jan 2022 08:06 AM PST

I want to know if there is any reason not to do this, and if not, then how to do so. I'm only trying to allow a capital letter at the beginning of the username so that I can use my name as my Linux username.
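A minimal sketch of what that looks like, assuming a Debian/Ubuntu-based WSL distro (where adduser spells the flag --force-badname; newer adduser releases rename it --allow-bad-names) and a placeholder username:

# Create the user despite the default name check rejecting the capital letter.
sudo adduser --force-badname Drew

# Alternatively, relax the check itself by editing NAME_REGEX in
# /etc/adduser.conf, e.g. to permit one leading capital:
# NAME_REGEX="^[A-Za-z][-a-z0-9_]*$"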

How does Microsoft Exchange determine which mailbox database a mailbox is in upon authenticating via POP3?

Posted: 18 Jan 2022 08:15 AM PST

I am performing a Microsoft Exchange 2010 to 2016 migration, and everything is ready for the transition except for this one problem.

I have an internal CNAME DNS record mail.domain.tld that is pointing to the Exchange 2010 server 192.168.0.10. The Exchange 2016 server is 192.168.0.20.

When I attempt to authenticate via POP3 on the Exchange 2016 server (using the OpenSSL command 'openssl s_client -connect 192.168.0.20:995'), the server authenticates me whether the user's mailbox is on 2010 or 2016. I know this because, when I terminate the connection, I get a response of '+OK Microsoft Exchange Server 2016 POP3 server signing off.' or '+OK Microsoft Exchange Server 2010 POP3 server signing off.', depending on where the mailbox resides.

When I attempt to authenticate via POP3 on the Exchange 2010 server I am only able to authenticate with mailboxes on the 2010 server, which I know is normal functionality.
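For anyone reproducing this, the manual test looks roughly like the following (the username is a placeholder; 995 is POP3 over TLS):

openssl s_client -quiet -connect 192.168.0.20:995

# then type the POP3 commands by hand:
#   USER someuser@domain.tld
#   PASS ********
#   QUIT     <- the sign-off banner here reveals which version answered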

However, when I change the internal CNAME record mail.domain.tld to point at the Exchange 2016 server 192.168.0.20 instead of the Exchange 2010 server 192.168.0.10, attempting to authenticate via POP3 on Exchange 2016 for a mailbox on 2010 gives an authentication error: '-ERR Logon failure: unknown user name or bad password.'. I can only assume it is failing to determine which Exchange server the mailbox belongs to and is authenticating me against the Exchange 2016 server, not 2010.

Where can I check the configuration for Exchange 2016 to see how it determines which mailbox database a mailbox belongs to? My best guess is that Exchange 2016 sees that the mailbox belongs to Exchange 2010, resolves mail.domain.tld thinking it is the 2010 server (when in fact it now points at the 2016 server), and then returns this authentication error because the mailbox isn't in its own database.

It's worth noting that in the Exchange 2016 EAC, under Servers --> Databases, each listed mailbox database has the server FQDN as the server name, not the CNAME record mail.domain.tld.

Where to install SSL Certificate for CURL request to external API

Posted: 18 Jan 2022 07:30 AM PST

This is a bit of a strange situation, but I'm hoping someone here can provide some assistance. I have a legacy Java application that communicates with an external 3rd party API (UPS online tools). We recently received a notification that we need to update our server certificate by January 21 or our transactions will no longer work.

Our application sends SSL requests to the external API via a curl request (it HAS to be curl due to the way this application is designed; it's a long story and not really relevant here). What I need to know is: where do I need to install the certificate? We have a front-end web server (Apache), a JBoss backend, and an HAProxy service in between. The curl request is made by the backend via a Groovy class executing a curl command. Which of those does the external API look to for a certificate when doing an SSL handshake?

In case it helps, here is what the groovy method looks like:

public String[] requestTracking(String url, String action, String trackingNumber, String access_license_number, String user_id, String password) {
    String request = """<?xml version="1.0"?>
    <AccessRequest xml:lang="en-US">
    <AccessLicenseNumber>${access_license_number}</AccessLicenseNumber>
    <UserId>${user_id}</UserId>
    <Password>${password}</Password>
    </AccessRequest>
    <?xml version="1.0"?>
    <TrackRequest xml:lang="en-US">
        <Request>
        <TransactionReference>
            <CustomerContext>My Context</CustomerContext>
            <XpciVersion>1.0001</XpciVersion>
        </TransactionReference>
        <RequestAction>Track</RequestAction>
        <RequestOption>${action}</RequestOption>
        </Request>
        <ShipmentIdentificationNumber>${trackingNumber}</ShipmentIdentificationNumber>
    </TrackRequest>
""";

    def command = [
        'sh',
        '-c',
        "curl -s -w '%{http_code}' '${url}' -X POST -d '" + request + "'"
    ];

    def proc = command.execute();
    def outputStream = new StringBuffer();
    def errorStream = new StringBuffer();
    proc.waitForProcessOutput(outputStream, errorStream);
    // System.out.println("error: " + errorStream.toString());
    String result = outputStream.toString().trim();

    // split off the HTTP status code
    String code = result.substring(result.length() - 3);
    String body = result.substring(0, result.length() - 3);

    String[] output = [code, body];
    return output;
}
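One observation that may help frame the question: curl itself is the TLS client here, so it is the CA trust store on the host running the JBoss backend that validates the remote certificate during the handshake; the external API never asks your servers for a certificate unless mutual TLS is in play. A quick diagnostic sketch (the UPS hostname is an assumption; substitute the real endpoint your app calls):

# Test the handshake from the backend host exactly as the app would.
curl -v https://onlinetools.ups.com/ -o /dev/null

# Show which CA bundle this curl build uses by default (if curl-config is installed).
curl-config --ca

# Point curl at a specific bundle if the default is stale (path varies by distro).
curl --cacert /etc/ssl/certs/ca-certificates.crt https://onlinetools.ups.com/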

Is it possible to route to the gateway of a VPN in another private network?

Posted: 18 Jan 2022 07:29 AM PST

I am trying to send HTTP requests through a VPN tunnel.

Our server has two network interfaces. One has a private IP (10.10.10.12/24) and a public IP; the other has only a private IP (172.16.10.12/24), and the VPN is installed on the 172.16.10.12/24 network.

The engineer who installed the VPN told me that the gateway of the VPN is 172.16.255.1, so I added a routing rule like this:

sudo route add 192.168.10.2 gw 172.16.255.1 dev eth1

but the result is:

SIOCADDRT: Network is unreachable
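That error usually means the kernel has no directly connected route to 172.16.255.1, which lies outside eth1's 172.16.10.0/24. A possible fix, sketched in modern iproute2 syntax with the interface name taken from the command above:

# Tell the kernel the VPN gateway is reachable on-link via eth1 ...
sudo ip route add 172.16.255.1 dev eth1

# ... then route the target host through that gateway.
sudo ip route add 192.168.10.2 via 172.16.255.1 dev eth1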

Is it possible to route traffic to the gateway of the VPN?

Any info or direction is appreciated. Thanks.

SuperMicro PXE Boot through PCI NIC

Posted: 18 Jan 2022 07:28 AM PST


I am facing an issue booting through PXE on a SuperMicro server.

The NIC that is supposed to boot through PXE is an Intel 10GE 82599ES SFI/SFP+. One server works well, but other servers running on the same hardware don't.

The Boot Agent version isn't always the same across the SuperMicro motherboards, and on some of them the Boot Agent doesn't detect the 10GE card. Does the card need to be inserted in a specific PCI slot to be recognized by the Boot Agent?

Clarification: the 10GE NIC is a secondary card; the main card has a capacity of 1GE, but it's always recognized by the Boot Agent.

Cloudfront caching resources even though response headers should prevent it

Posted: 18 Jan 2022 07:18 AM PST

I have recently set up a Cloudfront distribution with the following behaviour cache policy:

  • TTL settings:
    • Minimum TTL (seconds): 0
    • Maximum TTL (seconds): 31536000
    • Default TTL (seconds): 0
  • Cache Keys:
    • Headers - None
    • Cookies - None
    • Query strings - All

Unfortunately, pages with no-cache response headers continue to have their responses cached at fairly low levels of concurrency. I used ApacheBench to run 100 requests with a concurrency of 5, and received the following:

100 Cache-Control: no-cache, no-store, must-revalidate, max-age=0
 25 X-Cache: Hit from cloudfront
 75 X-Cache: Miss from cloudfront
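For reference, the test can be reproduced with something like this (the hostname is a placeholder; ab drives the load, and a separate pass tallies the headers):

# Drive 100 requests at concurrency 5 through the distribution.
ab -n 100 -c 5 https://www.mysite.com/

# Tally cache-related response headers across 100 individual requests.
for i in $(seq 100); do
  curl -sD - -o /dev/null https://www.mysite.com/
done | grep -E '^(X-Cache|Cache-Control|Set-Cookie)' | sort | uniq -c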

I also captured response headers that should be unique per request/response (given there are no request headers/cookies in the cache key), and this too shows duplicates. For example, this Set-Cookie response came back 4 times:

      4 Set-Cookie: csrftoken=h2uU7TKHJ6AicHgOIaJTwC5qIXJN4Zwf; Domain=.mysite.com; expires=Tue, 17-Jan-2023 15:10:37 GMT; Max-Age=31449600; Path=/  

I believe I have ways around this, such as a higher-priority Cloudfront behaviour that sets a no-cache policy, but that takes away the server side's power to decide dynamically whether a response should be cached, and it indicates that Cloudfront is not honouring the server-side decision.

freeradius and openldap: VLAN assignment working with radtest but not with wpa_supplicant

Posted: 18 Jan 2022 07:04 AM PST

Both of my services, freeradius and openldap, are on the same server. The FreeRADIUS schema is loaded into OpenLDAP.

I configured the radiusProfileDN of a user to link to a group. In this group, radiusReplyAttribute is set to return the VLAN information.

  • When I use the command radtest locally (or from a remote, already authenticated client), I receive an Access-Accept packet (RADIUS protocol) containing the VLAN information. A Wireshark capture shows the VLAN information is in the packet.

    LDAP + Radius           LDAP + Radius ----- Switch ----- Client
      <--------       or      <-----------------------------
      -------->               ----------------------------->
     *vlan info*                       *vlan info*
  • When I use the tool wpa_supplicant (PEAP-GTC), I authenticate successfully, but the client port is not added to the VLAN group. A Wireshark capture shows the Access-Accept packet exchanged between the switch and the RADIUS server doesn't have the VLAN information in it.

    LDAP + Radius ----- Switch ----- Client
      <------------------  <----------
      ------------------>  ---------->
        *no vlan info*    wpa_supplicant

From the OpenLDAP log, the same steps occur for authentication with radtest and with wpa_supplicant:

  1. read access allowed for radiusReplyAttribute on 'mygroup'
  2. result was in cache (radiusReplyAttribute)
  3. send_search_entry exit
  4. send_ldap_result & send_ldap_response

On the LDAP server, I tried putting the VLAN information directly on the user, and also in the purpose-made attribute for the VLAN info, but I get the same result.

Do you know where my problem comes from? It seems related to wpa_supplicant using a different protocol than the radtest command (maybe I am missing a line in the FreeRADIUS configuration)?
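One configuration detail that fits these symptoms, offered as something to check rather than a confirmed fix: with PEAP, the reply attributes are produced in the inner tunnel, and FreeRADIUS only copies them into the outer Access-Accept (the packet the switch actually sees) when told to. In FreeRADIUS 3.x that is the use_tunneled_reply flag of the peap section in mods-available/eap:

# mods-available/eap (FreeRADIUS 3.x) -- a sketch, not a complete config
peap {
    # ... existing settings ...

    # Copy reply attributes (e.g. Tunnel-Private-Group-Id carrying the VLAN)
    # from the inner-tunnel reply into the outer Access-Accept.
    use_tunneled_reply = yes
}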

Cuda_kde_depth_packet_processor.cu:39:10: fatal error: helper_math.h: File or directory not found

Posted: 18 Jan 2022 06:59 AM PST

I want my Kinect 2 to be recognized as a webcam on Ubuntu 21.10 + NVIDIA driver 470.86 + CUDA 11.4:

marietto-BHYVE:/home/marietto# nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

marietto-BHYVE:/home/marietto/Scrivania/libfreenect2# nvidia-smi

| NVIDIA-SMI 470.86       Driver Version: 470.86       CUDA Version: 11.4

To accomplish the task I've followed this tutorial:

https://www.notaboutmy.life/posts/run-kinect-2-on-ubuntu-20-lts/

and I have issued the following commands :

git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
mkdir build && cd build
cmake ..

but at some point I got this error:

marietto-BHYVE:/home/marietto/Scrivania/libfreenect2/build# make

-- using tinythread as threading library
-- Could NOT find TegraJPEG (missing: TegraJPEG_INCLUDE_DIRS TegraJPEG_WORKS)
CMake Warning (dev) at /usr/share/cmake-3.18/Modules/FindOpenGL.cmake:305 (message):
  Policy CMP0072 is not set: FindOpenGL prefers GLVND by default when
  available.  Run "cmake --help-policy CMP0072" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.

  FindOpenGL found both a legacy GL library:
    OPENGL_gl_LIBRARY: /usr/lib/x86_64-linux-gnu/libGL.so
  and GLVND libraries for OpenGL and GLX:
    OPENGL_opengl_LIBRARY: /usr/lib/x86_64-linux-gnu/libOpenGL.so
    OPENGL_glx_LIBRARY: /usr/lib/x86_64-linux-gnu/libGLX.so
  OpenGL_GL_PREFERENCE has not been set to "GLVND" or "LEGACY", so for
  compatibility with CMake 3.10 and below the legacy GL library will be used.
Call Stack (most recent call first):
  CMakeLists.txt:269 (FIND_PACKAGE)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Linking with these libraries:
  /usr/lib/x86_64-linux-gnu/libusb-1.0.so
  pthread
  va-drm
  va
  /usr/lib/x86_64-linux-gnu/libjpeg.so
  /usr/lib/x86_64-linux-gnu/libturbojpeg.so.0
  /usr/lib/x86_64-linux-gnu/libglfw.so
  /usr/lib/x86_64-linux-gnu/libGL.so
  /usr/lib/x86_64-linux-gnu/libOpenCL.so
  /usr/lib/x86_64-linux-gnu/libcudart_static.a
  Threads::Threads
  dl
  /usr/lib/x86_64-linux-gnu/librt.a
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Configurating examples
-- Using in-tree freenect2 target
-- Feature list:
--   CUDA    yes
--   CXX11    disabled
--   Examples    yes
--   OpenCL    yes
--   OpenGL    yes
--   OpenNI2    yes
--   TegraJPEG    no
--   Threading    tinythread
--   TurboJPEG    yes
--   VAAPI    yes
--   VideoToolbox    no (Apple only)
--   streamer_recorder    disabled
-- Configuring done
-- Generating done
-- Build files have been written to: /home/marietto/Scrivania/libfreenect2/build
[  4%] Built target generate_resources_tool
[  7%] Building NVCC (Device) object CMakeFiles/cuda_compile_1.dir/src/cuda_compile_1_generated_cuda_kde_depth_packet_processor.cu.o
/home/marietto/Scrivania/libfreenect2/src/cuda_kde_depth_packet_processor.cu:39:10: fatal error: helper_math.h: No such file or directory
   39 | #include <helper_math.h>
      |          ^~~~~~~~~~~~~~~
compilation terminated.
CMake Error at cuda_compile_1_generated_cuda_kde_depth_packet_processor.cu.o.RelWithDebInfo.cmake:220 (message):
  Error generating
  /home/marietto/Scrivania/libfreenect2/build/CMakeFiles/cuda_compile_1.dir/src/./cuda_compile_1_generated_cuda_kde_depth_packet_processor.cu.o

make[2]: *** [CMakeFiles/freenect2.dir/build.make:411: CMakeFiles/cuda_compile_1.dir/src/cuda_compile_1_generated_cuda_kde_depth_packet_processor.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:194: CMakeFiles/freenect2.dir/all] Error 2
make: *** [Makefile:149: all] Error 2

It can't continue the compilation because it can't find the file helper_math.h in the proper place. At this point I'm confused: I don't know where I can get that file or where I should place it. I imagine I should install the CUDA samples, and I tried, as you can see below, getting the CUDA package for Ubuntu 20.04 (I'm running 21.10, so I thought 20.04 was good, since it is the closest to my Ubuntu version). I deselected everything except the samples, but it didn't work:

marietto-BHYVE:/home/marietto/Scrivania# chmod +x cuda_11.6.0_510.39.01_linux.run
marietto-BHYVE:/home/marietto/Scrivania# ./cuda_11.6.0_510.39.01_linux.run

===========
= Summary =
===========
Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-11.6/

Please make sure that
 -   PATH includes /usr/local/cuda-11.6/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-11.6/lib64, or, add /usr/local/cuda-11.6/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run cuda-uninstaller in /usr/local/cuda-11.6/bin
***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 510.00 is required for CUDA 11.6 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run --silent --driver

Logfile is /var/log/cuda-installer.log

As a further experiment, I tried to install the samples from the CUDA version below:

http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.243_418.87.00_linux.run

but:

[INFO]: Driver installation detected by command: apt list --installed | grep -e nvidia-driver-[0-9][0-9][0-9] -e >
[INFO]: Cleaning up window
[INFO]: Complete
[INFO]: Checking compiler version...
[INFO]: gcc location: /usr/bin/gcc
[INFO]: gcc version: gcc version 10.3.0 (Ubuntu 10.3.0-11ubuntu1)
[ERROR]: unsupported compiler version: 10.3.0. Use --override to override this check.

What should I do? Thanks.
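For what it's worth, helper_math.h ships with the CUDA samples rather than the toolkit itself, so one possible route - a sketch under that assumption, with the destination path only an example that the build must actually search - is to fetch the samples from GitHub instead of wrestling with the .run installers:

# The CUDA samples repository carries helper_math.h in Common/.
git clone https://github.com/NVIDIA/cuda-samples.git

# Copy the header somewhere the libfreenect2 build already looks.
sudo cp cuda-samples/Common/helper_math.h /usr/local/cuda/include/

# Then re-run the build.
cd libfreenect2/build && cmake .. && make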

Make a new domain or add path to existing domain [closed]

Posted: 18 Jan 2022 06:29 AM PST

We recently added a new API to our infrastructure, and I was asked whether we should add a new path, like https://api.example.com/api2, or create a totally new domain, like https://api2.example.com/.

I was wondering what would be the best practices for this.

PHP8.1 fatal error 500 - log not working

Posted: 18 Jan 2022 05:56 AM PST

Since we updated to PHP 8.1, unfortunately not all PHP 500 errors are logged anymore.

I activated logging via catch_workers_output = yes

Most errors are logged; only some - probably very bad ones - are not. With PHP 7.4 it all worked.

Example - this error is not logged

$x = 3;
$y = is_null($x) ? null : trim($x) ?: null;
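A guess worth checking, stated as an assumption rather than a diagnosis: in PHP 8, an unparenthesized nested ternary like this one is a compile-time fatal error (it was only deprecated in 7.4), and compile-time failures can surface differently from runtime errors in FPM logging. It reproduces from the CLI:

# PHP 8 aborts at compile time with
# "Unparenthesized `a ? b : c ?: d` is not supported".
php -r '$x = 3; $y = is_null($x) ? null : trim($x) ?: null;'

# Parenthesizing resolves the ambiguity and runs fine:
php -r '$x = 3; $y = is_null($x) ? null : (trim($x) ?: null);'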

Route private network traffic using public ip/network

Posted: 18 Jan 2022 06:07 AM PST

Newbie here. Scenario: I have 2 servers, each with 1 public and 1 private IP.

serverA: eth0 (54.173.62.149 | public ip)
         privateip 10.38.1.1/24

serverB: eth1 (13.33.152.13 | public ip)
         privateip 10.48.1.1/24

Is it possible to route the private IP traffic through the public interface? Kindly provide suggestions/links on approaches for such cases.
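Private (RFC 1918) addresses aren't routable across the public internet, so the usual approach is a tunnel between the two public IPs that carries the private subnets. A minimal GRE sketch using the addresses above, assuming both hosts permit GRE (IP protocol 47); run the mirror image on serverB. For untrusted networks, an encrypted tunnel (WireGuard, IPsec) is the safer variant of the same idea.

# On serverA: a GRE tunnel to serverB's public address ...
sudo ip tunnel add gre1 mode gre local 54.173.62.149 remote 13.33.152.13 ttl 255
sudo ip link set gre1 up

# ... then send serverB's private subnet through it.
sudo ip route add 10.48.1.0/24 dev gre1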

./node_exporter: cannot execute binary file

Posted: 18 Jan 2022 05:55 AM PST

I am using RHEL 7.9 in VirtualBox. I installed the node_exporter binary as explained in the official documentation and tried to run it with the following command:

./node_exporter  

but it shows me the following error, instead of the output of the documentation:

ERROR: node_exporter: cannot execute binary file

How do I solve this, and why is it happening on my machine?
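This error usually means the binary was built for a different CPU architecture than the machine running it (for example, an arm64 tarball on an x86_64 VM). A quick check:

# What the machine is:
uname -m

# What the binary was built for:
file ./node_exporter

# If they disagree, download the matching release tarball,
# e.g. linux-amd64 for an x86_64 machine.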

Official Documentation that I followed: https://prometheus.io/docs/guides/node-exporter/

commands in documentation

Best way to enable DDoS protection on many individual GCP compute instances without load balancing?

Posted: 18 Jan 2022 08:38 AM PST

I've been scouring through the Google Cloud Armor docs for information about DDoS protection of a GCP compute VM instance. From what I've found, Google Cloud Armor Managed Protection provides traditional DDoS protection (perhaps layer 3 and layer 4), and it must be attached to a load balancer. Additionally, Google Cloud Armor Adaptive Protection provides layer 7 protection via machine-learning-based anomaly detection in network traffic, and it must be applied via a Google Cloud Armor security policy, which in turn must be attached to a load balancer.

But load balancers are associated with an instance group rather than a single VM instance, and they're intended for autoscaled instances (e.g. based on an instance template). I'm running several independent stateful server applications (each in its own VM instance) for which autoscaling is not really an option.

I could define a one-instance instance group (i.e. with autoscaling rules set to spawn exactly 1 instance). However, I have multiple individual VM instances that I want DDoS protection on, so I'd need an instance group for each one of these, and a load balancer for each instance group. This would get very expensive very quickly.

A more practical option would be to set up a single auto-scaled instance group attached to a load balancer to simply serve as a reverse proxy for all of the other servers, which are accessed internally within a common VPC. That reverse proxy could be attached to a load balancer and provided DDoS protection as the single point of entry.
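For concreteness, the reverse-proxy variant boils down to something like this sketch (resource names are placeholders, not a tested recipe):

# Attach a Cloud Armor policy to the load-balanced backend service that
# fronts the proxy instance group; the stateful VMs then sit behind the
# proxy on the internal VPC with no public exposure.
gcloud compute security-policies create edge-policy
gcloud compute backend-services update proxy-backend \
    --security-policy=edge-policy --global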

But it seems strange to me that load balancing is required for DDoS protection on GCP to begin with. After all, AWS Shield does not require a load balancer to take effect. Am I missing something?

Maximum number of IP addresses per load balancer in Google Cloud

Posted: 18 Jan 2022 08:08 AM PST

What is the maximum number of IP addresses (SSL-enabled domains) per Google Cloud load balancer? I'm trying to find out how far we can scale our application on a single load balancer setup. Or will the load balancer slow down because of a large number of different domains (with SSL certificates) attached to it?

I noticed that there is a limit of 15 SSL certificates per IP address and a limit of 100 HTTPS proxies per project. I think the 100 limit can be raised by asking Google, but does that make sense?

Our current project now has almost 100 domains (with SSL certs) attached to one load balancer, and we have plans to host thousands of hosts within the same application. Should we consider another kind of approach or set up multiple load balancers?

Windows Server 2019 Missing NTFS Permissions

Posted: 18 Jan 2022 08:08 AM PST

The following scenario is happening right now:

Each user has a personal drive mapped to their account, as well as a general shared drive.

John Doe has the following drives available:

\\server\johndoe
\\server\shared-company

The issue is that new files created by some programs get no NTFS permissions: the entries are just gone in the Security tab. I noticed it with Corel and Excel.

Second, it's not a general new-file issue: manually creating a new file (via Explorer on the client PC) assigns it the correct permissions.

Owner of the Shared Folder (C:\Share\shared-company) is a custom Admin Group as well, with full access.

The user itself only has "Modify", and I made sure they have the permissions to change the permissions.

The parent folder (C:\Share) has read only permissions for domain users, as well as full permissions for admins.

The audit log shows literally nothing on file creation, only entries from when I manually take ownership of the files; the blank permissions don't show up there.

My question being: how do I stop files losing their NTFS permissions? It's driving me insane that I have to take ownership upon creation of an Excel or Corel file. I haven't tested it with any other program.

Is there anything I am missing permission wise?

I am using Windows Server 2019 Standard. This problem occurred only recently, without any change; it worked fine before. Two computers are affected so far.
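If it helps others compare notes, a hypothetical check from an elevated prompt on the server (the file path is a placeholder):

REM Dump the (missing) ACL of an affected file.
icacls "C:\Share\shared-company\affected.xlsx"

REM Replace the broken ACL with permissions inherited from the parent folder.
icacls "C:\Share\shared-company\affected.xlsx" /reset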

redirecting to a different domain via haproxy

Posted: 18 Jan 2022 08:08 AM PST

Using haproxy 2.1.4

I am having an issue with ACLs and redirects.

I want to write an ACL for www.abcdomain.com/x/aa and redirect that to www.defdomain.com/aa.

In full, these are the redirects I need:

"www.abcdomain.com/x/aa >> www.defdomain.com/aa"    "www.abcdomain.com/x/bb >> www.defdomain.com/bb"    "www.abcdomain.com/x/cc >> www.defdomain.com/cc"  

snmp server not receiving data

Posted: 18 Jan 2022 08:54 AM PST

I have a problem using SNMP: the server does not receive queries from other machines. The service is running correctly and the port is listening.

If I do snmpwalk -v 2c -c mycommunity 192.168.1.82 (the server's own address) it answers me, and the same with snmpwalk -v 2c -c mycommunity localhost. That is, the service is working, but it does not respond from any machine other than itself.

sudo netstat -tulpn | grep snmp
udp        0      0 0.0.0.0:161        0.0.0.0:*        15014/snmpd

Has something similar happened to anyone?

# iptables -L
Chain INPUT (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:ntp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:7777
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:7777
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:6669
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:kshell
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:kpasswd
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:kerberos-adm
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:kerberos
ACCEPT     udp  --  anywhere             anywhere             udp dpt:kpasswd
ACCEPT     udp  --  anywhere             anywhere             udp dpt:kerberos
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ldap
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ldaps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:7389
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:7636
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:6670
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nrpe
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:32765:32769
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             udp dpts:32765:32769
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:11212
ACCEPT     tcp  --  172.17.0.0/16        anywhere             tcp dpt:mysql
ACCEPT     tcp  --  172.16.0.0/16        anywhere             tcp dpt:mysql
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:http-alt

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
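One thing stands out in that ruleset, offered as a check rather than a confirmed diagnosis: the INPUT policy is DROP and no rule accepts UDP port 161, which would explain why only queries from the machine itself get answered. A hypothetical fix:

# Insert an accept rule for SNMP queries ahead of the final REJECT.
sudo iptables -I INPUT -p udp --dport 161 -j ACCEPT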

RHEL 8 Registration Failed: SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED

Posted: 18 Jan 2022 07:46 AM PST

I am trying to stand up a minimal RHEL 8 server on VMware Fusion with RHEL Developer creds. When I attempt to run the command:

subscription-manager register --username my_username --password my_password  

I receive the following error:

Unable to verify server's identity: [SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:897)

I am assuming this is because my organization is performing SSL inspection and breaking the cert. I have gone to the config file /etc/rhsm/rhsm.conf and changed the insecure flag to 1 (which is supposed to disable certificate verification).

I'm not sure what I am doing wrong here. Any thoughts on what else I need to do to get this to go through?
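A quick way to test the SSL-inspection theory from the VM itself is to look at the chain the RHSM endpoint actually presents; if the issuer is your organization's proxy CA rather than Red Hat's, the middlebox is confirmed to be in the path (the hostname below is the standard RHSM endpoint):

# Show who actually issued the certificate this VM sees.
openssl s_client -connect subscription.rhsm.redhat.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject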

WDS stopped working - no servers under Windows Deployment Services 2008 R2

Posted: 18 Jan 2022 08:04 AM PST

I have three Windows 2008 R2 servers with MDT and WDS installed. It was reported multicasting stopped working on all of them. I made sure the multicast option was selected in the deployment shares' properties, but when I went to Windows Deployment Services in server manager, there were no servers showing there. When I tried to add local computer I got the "A directory service error has occurred" error.

The WDS service was running, though, and I was able to restart it, but it didn't help. I rebooted the server, then uninstalled WDS, rebooted again and reinstalled it, rebooted once more. WDS appeared again, but there was still no server under the servers node and I still couldn't add any getting the same error. Also, after reinstalling WDS, the WDS service won't start at all now.

I tried running the commands below, but they failed with the same "directory service error", possibly just because there was no server added:

wdsutil /uninitialize-server
wdsutil /initialize-server /reminst:[PATH_TO_REMOTEINSTALL_DIRECTORY]

I enabled tracing and when trying to start the WDS service I get a bunch of errors in WDSServer.log, but I believe they might be irrelevant, and all boil down to no server showing under the "Servers" node. I found a similar thread here https://social.technet.microsoft.com/Forums/windows/en-US/265b4b53-63ac-491f-817c-6030daa39b81/cant-start-quotwindows-deployment-servicesquot-service?forum=itprovistadeployment, but the suggested solutions don't work for me, as explained above.

It puzzles me that all three servers lost the WDS functionality - could it be something related to AD? I made sure the domain, DNS servers, etc. are pingable and the computer accounts have the necessary privileges set in AD.

I searched the internet high and low, but couldn't find any information on exactly such an issue, so any help will be greatly appreciated.

Disk Is Running Full - While Snapshotting, Coincidence Or Possible Cause?

Posted: 18 Jan 2022 06:21 AM PST

My disk is running full, so I checked server logs and the largest files, and everything is OK. iotop is also fine. I deleted 1 GB of files for testing purposes, and it takes a minute until the disk runs full again, so something is clearly writing, although I can't see what via iotop.

Inodes (df -ih) are also fine - not completely used.

Here is a picture. Strangely it says that only 318GB out of 335GB is USED but 0% is available:

Picture of disk usage

Then I checked the control panel and the VPS is currently snapshotting. Can a snapshot feature use disk space as temporary storage or is that running entirely on external drives?
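One classic cause of "df says full, but no big files exist" is space held by deleted-but-still-open files, which file-level checks can't see; worth ruling out before blaming the snapshot:

# List open files whose directory entry has been deleted (link count 0).
sudo lsof +L1

# Compare filesystem-level vs file-level usage; a large gap points the same way.
df -h /
sudo du -shx /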

Thank you for your input, much appreciated

Reverse proxy redirect traffic ssl "ex: openvpn"

Posted: 18 Jan 2022 07:01 AM PST

With NetScaler, I can redirect all SSL traffic to a specific host depending on its subdomain.

Example:

                                        +-------------+
                               +------> |webserver 443|
                               |        +-------------+
+----------+    +--------------+        www.example.com:443
| internet |--->| reverseproxy |
+----------+    +--------------+
                               |        +-----------+
                               +------> |openvpn 443|
                                        +-----------+
                                        vpn.example.com:443

The traffic is just redirected and is not decrypted, because we have not configured any certificate on the NetScaler. We have just one wildcard certificate for the reverse proxy.

I should say that I did not configure the NetScaler myself, so it is possible I'm wrong about the configuration.

Question:

  1. I would like to know if it is possible to do the same with open-source software like Nginx or Squid (see the sketch below).
  2. How would such a configuration work?
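A minimal sketch of the Nginx equivalent, with the hostnames from the diagram and upstream addresses invented for the example: the stream module's ssl_preread reads only the SNI field of the TLS ClientHello, so traffic is routed without being decrypted - matching the certificate-less NetScaler behaviour described above.

# nginx.conf (stream context, ngx_stream_ssl_preread_module)
stream {
    map $ssl_preread_server_name $backend {
        www.example.com  192.0.2.10:443;   # webserver
        vpn.example.com  192.0.2.20:443;   # openvpn
    }

    server {
        listen 443;
        ssl_preread on;        # peek at the SNI without terminating TLS
        proxy_pass $backend;
    }
}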

WS2012R2 RemoteApp Server refuses to remember password

Posted: 18 Jan 2022 08:04 AM PST

I have a problem that's puzzling me and I can't quite put my finger on it.

We have two VMs, running Windows Server 2012 R2, both configured with Remote Desktop Services and running the RemoteApp server for our employees to connect to.

Both are configured identically, just with different programs on each. They get their GPOs from the domain server, so any changes I make in the local gpedit.msc get overwritten shortly after anyway.

One of the two servers remembers passwords after you log in for the first time, but the other does not. I can remember the password in Remote Desktop (mstsc.exe) if I uncheck "always ask for credentials", but when I try to run the deployed RemoteApps (each is its own .rdp file, as those who use RemoteApps probably know) it still prompts each time I initiate the session.

What puzzles me is that both of them have the same Group Policy settings! I also looked in Server Manager and confirmed that all the Remote Desktop Services settings matched as well... there's nothing in there telling it to not remember passwords. "Prompt for credentials on the client computer" is not configured, for example.

Any other ideas what might be causing this? We thought it might have to do with which is set as the default RemoteApp server (in Windows 8 and 10), but that did not help - even setting the second as the default (which makes it so users cannot modify or remove it from Control Panel -> RemoteApp and Desktop Connections) doesn't fix the problem, and it still prompts for a password every logon.

nginx: Multiple cross-domain 301 redirects with different page addresses

Posted: 18 Jan 2022 07:01 AM PST

I'm moving my old site over to a new domain, and with that new domain come new naming conventions. I'm trying to figure out the simplest way of accomplishing the following for roughly 8 different pages:

  1. http to https
  2. Different domain
  3. Redirect (1) old www and (2) old non-www addresses, plus (3) new non-www address to new www address

Here are two old pages from the old domain:

Portfolio:

http://dcturanoinc.com/?dct=portfolio_expediting
http://www.dcturanoinc.com/?dct=portfolio_expediting

Services:

http://dcturanoinc.com/?dct=services_expediting
http://www.dcturanoinc.com/?dct=services_expediting

Here are two new pages from the new domain:

Services:

https://dcturano.com/services/
https://www.dcturano.com/services/

Portfolio:

https://dcturano.com/portfolio/
https://www.dcturano.com/portfolio/

EDIT: This is my nginx.conf file as it currently stands.

server {
    listen 80;
    listen [::]:80;
    listen 443 default_server ssl;

    server_name dcturano.com www.dcturano.com;

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }
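For the old domain, a sketch of one approach - untested, and assuming all eight pages follow the ?dct=section_expediting -> /section/ pattern shown above, so the map needs one entry per page: a map translates the old query parameter to the new path, and separate server blocks fold everything into the https www host.

# Old domain: translate ?dct=... to the new path, then 301 to the new host.
map $arg_dct $new_path {
    default               /;
    portfolio_expediting  /portfolio/;
    services_expediting   /services/;
}

server {
    listen 80;
    listen [::]:80;
    server_name dcturanoinc.com www.dcturanoinc.com;
    return 301 https://www.dcturano.com$new_path;
}

# New domain: fold the non-www name into the www one.
server {
    listen 443 ssl;
    server_name dcturano.com;
    return 301 https://www.dcturano.com$request_uri;
}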

How do I set up disk quotas over LDAP on CentOs?

Posted: 18 Jan 2022 06:00 AM PST

I've been googling for some time and I haven't been able to find any resources or hints on the subject.

I am wondering if it is possible to do so, and if so, how? Any nudge in the right direction will be appreciated.

I do know that if you download and install "Linux Quota" from source, you get some Perl scripts which are supposed to help with the matter, but as far as I know there is no good documentation to guide you along the way.

I am also running a NFS server from the same machine.

Note: This is for a university assignment, so I might be totally stupid for asking this question. I am trying to explore the options. If there is a better way of solving this, please do tell.

Edit: Here is a link to the site of Linux Quota. They do include an LDAP schema, so it should be possible.

Time or date difference using remote desktop

Posted: 18 Jan 2022 07:39 AM PST

When remoting into 2008 R2 we are getting this message.

Remote Desktop cannot verify the identity of the remote computer because there is a time or date difference between your computer and the remote computer. Make sure your computer's clock is set to the correct time, and then try connecting again.

I have checked the server and the time is correct.
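Since both ends of the clock comparison matter here, a hypothetical sanity check from the server (and from a client) using the built-in time service tooling:

REM Show this machine's time source, stratum and last sync.
w32tm /query /status

REM Compare the clocks of the domain machines as the domain sees them.
w32tm /monitor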

Checking the event logs, it says "The RPC server is unavailable"; I'm not sure if this is related.

Additional note: we have Nagios monitoring, and it has reported "Result from smbclient not suitable".

Problems with starting Uniform Server

Posted: 18 Jan 2022 06:00 AM PST

Hi everyone. I am having a serious problem accessing the Uniform Server that I installed some time ago to build a web database. The last time I tried to start the server, I was successful. I just tried to start it again a little while ago, and the browser said the link appears to be broken. Can somebody help me fix this, please? Any help would be much appreciated.

Listen to UDP data on local port with netcat

Posted: 18 Jan 2022 08:37 AM PST

netcat -ul -p2115 fails with a usage statement.

What am I doing wrong?
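Most likely this comes down to which netcat flavor is installed, since the two common ones disagree about combining -l with -p; a sketch of both spellings:

# OpenBSD netcat (the default on Debian/Ubuntu): the listen port is
# positional, and some versions reject mixing -l with -p outright.
nc -u -l 2115

# Traditional/GNU netcat: the port goes with -p.
nc -u -l -p 2115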

How to use a config file (ini, conf,...) with a PowerShell Script?

Posted: 18 Jan 2022 08:32 AM PST

Is it possible to use a configuration file with a PowerShell script?

For example, the configuration file:

#links
link1=http://www.google.com
link2=http://www.apple.com
link3=http://www.microsoft.com

And then call this information in the PS1 script:

start-process iexplore.exe $Link1  
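Yes - one minimal sketch, assuming the key=value format above (the config.ini filename is a placeholder), is to parse the file into a hashtable while skipping comment lines:

# Read key=value pairs, ignoring blank and #-comment lines.
$config = @{}
Get-Content .\config.ini |
    Where-Object { $_ -match '=' -and $_ -notmatch '^\s*#' } |
    ForEach-Object {
        $name, $value = $_ -split '=', 2
        $config[$name.Trim()] = $value.Trim()
    }

# Then, for example:
Start-Process iexplore.exe $config['link1']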
