Thursday, February 10, 2022

Recent Questions - Server Fault

Files remaining locked after writing using mount.cifs

Posted: 10 Feb 2022 08:32 AM PST

We have a server running RHEL 8.3 that connects to a backup server running Windows Server 2016. We mount the share from RHEL with:

mount.cifs \\SMCFILE\SMC$\Data$\picksaves /mnt/smcbackups -o user=user,pass="pwd",uid=uidvalue,gid=gidvalue,file_mode=0777,dir_mode=0777

We use that same command on other servers running the same OS, but those just copy files around.

When doing the backups, sometimes the file will remain locked. I suspect it has something to do with the file being held open.

If you log into the Windows machine, it says the file is locked by PID 4 or similar. Disconnecting the mount does not release it, so there must be a service handling the transfer (SMB?) holding the lock, and it will not release it until we kill it.

Is what we are doing the proper way to mount a Windows server file system on Linux?
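Not an answer to the mount syntax itself, but two generic checks that can narrow down where a stale lock lives (commands are standard, not specific to this setup):

```
# Before unmounting, list which local processes still hold files open on
# the share; a copier that died mid-write can keep a handle alive:
lsof /mnt/smcbackups

# If the mount must go away regardless, a lazy unmount detaches it even
# with open handles still outstanding:
umount -l /mnt/smcbackups
```

Note that on Windows, PID 4 is the System process, i.e. the kernel-mode SMB server itself, so a handle reported under PID 4 is held by the server on behalf of a client session rather than by a user program.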

How to Copy and Paste text from Virtual machine to Host OS [CentOS to Windows 11]

Posted: 10 Feb 2022 08:30 AM PST

Error: failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist

Postfix sender_bcc_maps / ignore a specific user

Posted: 10 Feb 2022 08:16 AM PST

I have sender_bcc_maps set up and working with postfix so that outgoing mail gets bcc'd to the sender's address. My configuration is more or less:

main.cf

sender_bcc_maps = regexp:/etc/postfix/regexp_sender_bcc  

regexp_sender_bcc

/^([^@]+)@[a-zA-Z0-9_]+\.[a-zA-Z0-9_]+$/ $1@example.com  

This works great. Now I have an email address nobody@example.com that I'd like to exclude from this configuration.

Idea 1: I first tried adding a row to route to a non-existent mailbox

/^nobody.*$/ nobody@example.com  

Predictably, this results in an attempt to bcc that address and a bounceback since it doesn't exist.

Idea 2: Next I tried simply changing the target to an empty string:

/^nobody.*$/  

This generates a warning and completely drops the outgoing mail as well:

warning: sender_bcc_maps lookup of nobody@example.com returns an empty string result
warning: sender_bcc_maps should return NO RESULT in case of NOT FOUND
warning: sender_bcc_maps map lookup problem -- message not accepted, try again later

Idea 3: I then tried routing this mail to a local user:

/^nobody.*$/ nobody@localhost  

This somewhat does the job but then all these messages are still delivered to the local mail system.

I'd like to do either one of these - preferring the former:

  • configure sender_bcc_maps to completely ignore a specific sender's address
  • configure postfix to completely discard mail to a specific user
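For what it's worth, Postfix regexp tables support if/endif guards, so a sketch of the first option (untested, built from the map in the question) is to wrap the catch-all rule so it never matches the excluded sender. A lookup that matches nothing returns NO RESULT, which is exactly what the warning in Idea 2 asks for:

```
if !/^nobody@/
/^([^@]+)@[a-zA-Z0-9_]+\.[a-zA-Z0-9_]+$/ $1@example.com
endif
```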

What really is a DB server? Are they just normal server computers? [closed]

Posted: 10 Feb 2022 07:35 AM PST

It's my first server setup, so I don't know much about servers and related topics. If anything is wrong with this question, please point it out.

My question is:

What really is a DB server? Is it a normal server computer like a Dell PowerEdge R510 with lots of storage, or is it something else?

Let me explain: I basically want to know what hardware companies use for their databases.

Proxy PUT Requests - Apache Configuration

Posted: 10 Feb 2022 07:23 AM PST

I am trying to redirect PUT request for a specific endpoint to another host.

The said endpoint resides under /internal and accepts only PUT requests. The other endpoints under /internal will continue to be served by my main host/server.

I have tried setting it up both with rewrite rules using the proxy ([P]) flag and with the ProxyPass directive; both result in a 500 Internal Server Error and the request never makes it to the new host.

My client application uses a simple REST client that cannot handle redirects, so I have to use some kind of proxying.

Apache logs show the following

[Thu Feb 10 08:56:20.394444 2022] [rewrite:trace1] [pid 8579] mod_rewrite.c(480): [client XXX.XXX.XXX.XXX:XXXXX] XXX.XXX.XXX.XXX - - [subdomain1.mydomain.com/sid#55d4ed07ecb0][rid#55d4ed2c5f20/initial] go-ahead with proxy request proxy: https://subdomain2.mydomain.com/internal/my-endpoint [OK]  

Here's the current configuration for the specific vhost

<VirtualHost *:80>
    ServerName subdomain1.mydomain.com
    ProxyPass /soap ajp://localhost:7007/soap retry=3
    ProxyPreserveHost On
    Redirect / https://subdomain1.mydomain.com/
    ErrorLog /var/log/httpd/subdomain1_error
</VirtualHost>

<VirtualHost *:443>
    ServerName subdomain1.mydomain.com
    Options FollowSymlinks
    ProxyRequests On
    ProxyPreserveHost On
    #RewriteEngine On

    #RewriteCond %{REQUEST_URI} '^/internal/my-endpoint'
    #RewriteCond %{REQUEST_METHOD} ^(PUT)
    #RewriteRule "^/(.*)" "https://subdomain2.mydomain.com/internal/my-endpoint" [P]

    ProxyPass /internal/my-endpoint https://subdomain2.mydomain.com/internal/my-endpoint
    ProxyPassReverse /internal/my-endpoint https://subdomain2.mydomain.com/internal/my-endpoint
    ProxyPreserveHost On

    LogLevel alert rewrite:trace3
    CustomLog /var/log/httpd/subdomain1_access_log common
    ProxyPass / ajp://localhost:7007/ retry=3
    ProxyPassReverse / ajp://localhost:7007/ retry=3
    ProxyPreserveHost Off
    ErrorLog /var/log/httpd/subdomain1
    SSLEngine on
</VirtualHost>
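One detail worth checking: when the ProxyPass or RewriteRule target is an https:// URL, mod_proxy also needs SSLProxyEngine On in that vhost, and a missing directive can surface as exactly this kind of 500 right after the "go-ahead with proxy request" trace line. A sketch of method-restricted proxying under that assumption (placement inside the :443 vhost):

```apache
SSLProxyEngine On

RewriteEngine On
RewriteCond %{REQUEST_METHOD} =PUT
RewriteRule ^/internal/my-endpoint$ https://subdomain2.mydomain.com/internal/my-endpoint [P,L]
```

With the rewrite handling PUT, the plain ProxyPass for /internal/my-endpoint can be dropped so GET and friends fall through to the ajp:// backend.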

Does roblox, discord, steam and other sites work on FreeBSD? [closed]

Posted: 10 Feb 2022 06:34 AM PST

Could you install Roblox on FreeBSD? What about Discord and Steam?

Why can I write one set of URLs but not others with NGINX?

Posted: 10 Feb 2022 08:11 AM PST

I've set up NGINX, version 1.18.0, as reverse proxy for my Apache Superset 1.4.0 installation.

I'm trying to capture some URL patterns, and rewrite them by adding standalone=1 at the end.

The following NGINX configuration works as expected:

location /superset/explore/ {
    if ($args ~* "(.*?)slice_id%22%3A133(.*)$") {
        rewrite ^/superset/explore/(.*)$ /superset/explore/$1?standalone=1 break;
    }

    proxy_pass http://127.0.0.1:8087;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Because when I visit (with Chrome) a URL such as http://192.168.239.40:8088/superset/explore/?form_data=%7B%22viz_type%22%3A%22echarts_timeseries_line%22%2C%22datasource%22%3A%2233__table%22%2C%22slice_id%22%3A133%2C ..., I can see in Chrome's address bar that it is replaced by the original URL plus &standalone=1.

But when I try to do something similar for another URL pattern for Apache Superset, such as the following:

location /dashboard/list/ {
    rewrite ^/dashboard/list/(.*)$ /dashboard/list/$1?standalone=1 break;

    proxy_pass http://127.0.0.1:8087;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

and I request http://192.168.239.40:8088/dashboard/list/ with Chrome, I see the address bar replaced with http://192.168.239.40:8088/dashboard/list/?pageIndex=0&sortColumn=changed_on_delta_humanized&sortOrder=desc&viewMode=table but I don't see any &standalone=1 appended.

I also checked Superset logs to see what it serves after I request http://192.168.239.40:8088/dashboard/list/ and I see that ?standalone=1 is actually appended!

Feb 10 14:09:19 dashboard-server python[34169]: 2022-02-10 14:09:19,482:INFO:werkzeug:127.0.0.1 - - [10/Feb/2022 14:09:19] "GET /dashboard/list/?standalone=1 HTTP/1.0" 200 -
Feb 10 14:09:20 dashboard-server python[34169]: 2022-02-10 14:09:20,729:INFO:werkzeug:127.0.0.1 - - [10/Feb/2022 14:09:20] "GET /api/v1/dashboard/_info?q=(keys:!(permissions)) HTTP/1.0" 200 -
Feb 10 14:09:20 dashboard-server python[34169]: 2022-02-10 14:09:20,771:INFO:werkzeug:127.0.0.1 - - [10/Feb/2022 14:09:20] "GET /api/v1/dashboard/?q=(order_column:changed_on_delta_humanized,order_direction:desc,page:0,page_size:25) HTTP/1.0" 200 -

Any ideas why this is happening?
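One thing worth knowing here: a rewrite with the break flag is internal, so nginx never tells the browser about the new URL (which matches the logs, where the backend does receive ?standalone=1). The address-bar change observed for /superset/explore/ most likely comes from Superset's front-end JavaScript, not from nginx. If the goal is for the client to actually see ?standalone=1, an external redirect is needed; a sketch (note this simple form drops any existing query string):

```nginx
location /dashboard/list/ {
    # Redirect once, only when standalone is not already present,
    # so the redirected request falls through to proxy_pass.
    if ($arg_standalone = "") {
        return 302 $uri?standalone=1;
    }

    proxy_pass http://127.0.0.1:8087;
}
```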

The complete /etc/nginx/conf.d/superset.conf is as follows:

server {      listen 8088;      server_name 192.168.239.40;        location / {          proxy_pass http://127.0.0.1:8087;      }        location /superset/explore/ {          if ($args ~* "(.*?)slice_id%22%3A133(.*)$") {              rewrite ^/superset/explore/(.*)$ /superset/explore/$1?standalone=1 break;          }            proxy_pass http://127.0.0.1:8087;          proxy_set_header Host $host;          proxy_set_header X-Real-IP $remote_addr;          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;          proxy_set_header X-Forwarded-Proto $scheme;      }       location /dashboard/list/ {          rewrite ^/dashboard/list/(.*)$ /dashboard/list/$1?standalone=1 break;            proxy_pass http://127.0.0.1:8087;          proxy_set_header Host $host;          proxy_set_header X-Real-IP $remote_addr;          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;          proxy_set_header X-Forwarded-Proto $scheme;      }          # required as superset has hardcoded base path urls      location /static/ {          proxy_pass http://127.0.0.1:8087/static/;      }        # to expose a specific dashboard using a custom url      # the below example will make dashboard 2 available in standalone mode      # on $host/dashboards/my-dashboard      location /dashboards/my-dashboard {          proxy_pass http://127.0.0.1:8087/superset/dashboard/2/?standalone=true;      }  }  

How to solve connectivity problems stemming from computers with the same name in the same domain?

Posted: 10 Feb 2022 06:06 AM PST

I once made the mistake of using a laptop on the local network that had the same computer name ('Computer12') as one of the computers on the domain. Even weeks after removing that laptop from the network, I can't always remotely connect to 'Computer12' by computer name. I can connect fine using the IP address, just not the name, so I suspect there is confusion about which 'Computer12' the connection goes to. The same thing happens when I try to access its files remotely from Windows Explorer: I get prompted to re-enter credentials even though I had previously clicked save.

Is there a way to reset the cache on the network so it forgets that one instance of 'Computer12'? I am really trying to avoid renaming the computer.
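Assuming the stale name is being answered from a client-side cache, these standard Windows commands clear the usual suspects (run them on the machine doing the connecting); if the wrong record lives on the DNS server or in WINS instead, it has to be deleted there:

```
ipconfig /flushdns   :: flush the DNS resolver cache
nbtstat -R           :: purge and reload the NetBIOS remote name cache
klist purge          :: discard cached Kerberos tickets for the old name
net use * /delete    :: drop cached SMB sessions and their credentials
```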

kubernetes k3s prometheus no node-exporter on the master node

Posted: 10 Feb 2022 05:56 AM PST

I have deployed a k3s cluster (1 master and 2 agents (workers)) on a Proxmox server, and deployed Prometheus based on the helm-charts/kube-prometheus-stack chart.

node-exporters have been deployed only on the worker nodes.

Prometheus fires the following alerts:

KubeControllerManagerDown
  description: KubeControllerManager has disappeared from Prometheus target discovery.
KubeProxyDown
  description: KubeProxy has disappeared from Prometheus target discovery.
KubeSchedulerDown
  description: KubeScheduler has disappeared from Prometheus target discovery.

Why is a node-exporter missing on the master node?

I installed the k3s master with the --node-taint option. Could that be the reason for this issue?
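If the master carries a NoSchedule taint, the node-exporter DaemonSet will indeed skip it unless it tolerates that taint. A values-file sketch for kube-prometheus-stack (key names follow the chart's prometheus-node-exporter subchart; adjust to your chart version):

```yaml
prometheus-node-exporter:
  tolerations:
    - operator: Exists
      effect: NoSchedule
```

The KubeControllerManagerDown/KubeProxyDown/KubeSchedulerDown alerts are a separate, well-known k3s quirk: those components run inside the single k3s binary rather than as scrapeable static pods.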

Registry Key disappearing on reboot after added to read application and services logs via WMI

Posted: 10 Feb 2022 05:49 AM PST

I have a Windows Server 2019 VM and am trying to collect some specific Windows Event Logs using Get-WmiObject

In order to read an event log channel under Applications and Services, I created a registry key and configured it similar to how this post describes the process. This worked, but when the server reboots, the registry key I created disappears. This happens on a brand-new image, so I can't tell whether something specific is rewriting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\ on reboot or something else is going on. I haven't been able to locate any documentation that would give the answer. Is there something I can adjust, or a standard pattern to recreate the keys on boot?

Thanks!

tcp client which makes many outgoing connections

Posted: 10 Feb 2022 07:14 AM PST

My TCP client needs to connect to thousands of servers (on the local network) all at once. The communication is simple: a 10-byte string TCP request and a 20-byte response.

I have a gigabit network.

When the client connects to the servers one by one there are no issues, but all at once only about 1000 connections succeed and the rest fail, typically with "no route to host" errors.

My client runs on a brand-new NUC with an i7. I have been tweaking the TCP stack:

sysctl -w fs.file-max=100000
mtu 500, 3000, 9000
ulimit -n 32000 2000 3000 5000

But no success; the best I managed is 1800 connections. Do you know how to overcome this issue?
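For a burst of thousands of connections on one LAN, "no route to host" around the thousand mark often points at the kernel's ARP neighbor table overflowing rather than at file descriptors. A sysctl fragment to try, with illustrative values:

```
# /etc/sysctl.d/90-many-peers.conf
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384

# and, if netfilter connection tracking is active on this host:
net.netfilter.nf_conntrack_max = 262144
```

Watching dmesg for "neighbour table overflow" while the burst runs would confirm or rule this out.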

Kubernetes kubelet-client-current.pem expired

Posted: 10 Feb 2022 05:05 AM PST

I'm new to K8s and I'm facing a problem with a certificate. Version 1.13 is used. One of the worker nodes is in NotReady status. I checked the logs and it turned out that:

Part of the existing bootstrap client certificate is expired
Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem"
Failed to connect to apiserver: the server has asked for the client to provide credentials

I checked, and indeed kubelet-client-current.pem points to an expired certificate. My question is how to renew this kubelet-client-<current_date>.pem file.

EDIT:
I read that for this cert: "The initial validity period is one year. When the certificate is about to expire, it is automatically renewed and the validity period is extended by one year." Is there any way to create it manually? Or how can I force kubeadm to do it?

I also deleted /var/lib/kubelet/pki, and after a kubelet restart it was recreated, but only with kubelet-client.key.tmp, kubelet.crt, and kubelet.key.
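One recovery path that is often suggested for expired kubelet client certs on clusters of this vintage, sketched here with assumed paths; verify each step against your cluster's layout before running anything:

```
# on a control-plane node: mint a fresh bootstrap token
kubeadm token create

# on the broken worker: put the new token into the bootstrap kubeconfig,
# drop the expired client certs, and restart kubelet so it re-bootstraps
# a new kubelet-client-<date>.pem:
vi /etc/kubernetes/bootstrap-kubelet.conf    # replace the old token
rm /var/lib/kubelet/pki/kubelet-client*
systemctl restart kubelet
```

The node's CSR may need approving afterwards (kubectl get csr / kubectl certificate approve), depending on the cluster's approval settings.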

how to disable lines in java.security for linux and windows

Posted: 10 Feb 2022 05:42 AM PST

I need to disable the following lines in the java.security file (Java 8 SE):

jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
    DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL, \
    include jdk.disabled.namedCurves

On Windows, the path is: Program Files\Java\jre1.8.0_301\lib\security\java.security

The purpose of disabling these lines is to avoid the following error message:

Error: javax.net.ssl.SSLHandshakeException: No appropriate protocol
(protocol is disabled or cipher suites are inappropriate)

The solution proposed here is to comment out these lines.

I'm not sure whether putting a comment character (#) at the beginning of the line will disable them on both operating systems, because this Oracle document says the comment marker is //.

Whichever marker is correct, I also don't know whether it is necessary to comment out all 3 lines, or whether commenting out just the first disables all 3. For example:

this way:

# jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
    DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL, \
    include jdk.disabled.namedCurves

or that way?

# jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
#   DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL, \
#   include jdk.disabled.namedCurves

Question: how do I comment out (disable) these lines in the java.security file on both Windows and Linux?
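An alternative that avoids editing the shipped file on either OS: the JRE can load an extra properties file that overrides individual entries, so the algorithm list can be emptied per application. This relies on security.overridePropertiesFile=true in the stock java.security, which is the default; the file name below is made up:

```
# override.security.properties
# activate with: java -Djava.security.properties=/path/to/override.security.properties ...
jdk.tls.disabledAlgorithms=
```

As for the commenting question itself: '#' (not '//') is the comment marker for this file on both operating systems, and a comment line is not continued by a trailing backslash, so all three physical lines need their own '#', i.e. the second form shown in the question.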

How to enable camera and mic permission in macOS app using Chrome embedded browser? [closed]

Posted: 10 Feb 2022 07:40 AM PST

I want to use the CEF library in a macOS application in order to embed a native web browser in the app. I downloaded the cefsimple project and tried to run it. I need to use WebRTC with CEF, but I am not able to turn on camera and mic permissions: when CEF launches, I cannot let the user grant camera and mic access.
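On the macOS side, permission prompts can only appear if the app declares usage descriptions; without these Info.plist keys the system denies camera and microphone access outright. A fragment (the strings are placeholders):

```xml
<!-- Info.plist additions: required before macOS will show the
     camera/microphone permission prompts for any app, an embedded
     browser included -->
<key>NSCameraUsageDescription</key>
<string>Camera access is used for WebRTC calls.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is used for WebRTC calls.</string>
```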

How to setup permanent MTU size in RHEL 7 for eth0 interface?

Posted: 10 Feb 2022 07:30 AM PST

I am using RHEL 7 and trying to set a permanent MTU of 8500, but I cannot find a way to make it persist. The server does not have a dhcp.conf file.

I am using the command below as a temporary solution, but the MTU resets after a server reboot. If anyone has achieved a permanent solution, please share it.

ifconfig eth0 mtu 8500 up  
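One conventional RHEL 7 approach is to persist the value in the interface's ifcfg file, which the network service re-applies on boot. A sketch (the path is the RHEL 7 convention; adjust the interface name to match your system):

```shell
# Persist MTU=8500 for eth0 by editing its ifcfg file.
IFCFG="${IFCFG:-/etc/sysconfig/network-scripts/ifcfg-eth0}"

set_mtu() {
    # Replace an existing MTU= line, or append one if none exists.
    local f="$1" mtu="$2"
    if grep -q '^MTU=' "$f" 2>/dev/null; then
        sed -i "s/^MTU=.*/MTU=${mtu}/" "$f"
    else
        echo "MTU=${mtu}" >> "$f"
    fi
}

# Only touch the file if it actually exists on this machine;
# the setting takes effect on the next ifup or reboot.
[ -f "$IFCFG" ] && set_mtu "$IFCFG" 8500
```

On NetworkManager-managed systems, `nmcli connection modify <name> 802-3-ethernet.mtu 8500` achieves the same persistence.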

Apache2 SSL only works when virtualhost is removed?

Posted: 10 Feb 2022 05:50 AM PST

I'm making a website hosted at sparrowthenerd.space, and I'm trying to use multiple subdomains so I can run NextCloud, OctoPrint, and a general webpage all from the same IP address. As I understand it, this can be accomplished with VirtualHosts in Apache2. However, unless I remove the VirtualHost tags from my conf file (below), I get an SSL handshake error with CloudFlare enabled, and an SSL protocol error without it.

I am using Apache2 v2.4.52 on Debian 11 Bullseye. The web server is self-hosted and proxies to NodeJS on port 9999 (I think that's the right terminology?).

#<VirtualHost xxx:xx:xx:xxx:443>
        ServerAdmin webmaster@localhost
        ServerName sparrowthenerd.space
        DocumentRoot /var/www/sparrowthenerd

        ProxyPass /.well-known/ !
        ProxyPass / http://localhost:9999/
        ProxyPassReverse / http://localhost:9999/
        ProxyPreserveHost On

        SSLEngine on
        SSLProtocol all -SSLv2
        SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
        SSLCertificateFile /etc/apache2/ssl/sparrowthenerd.space.pem
        SSLCertificateKeyFile /etc/apache2/ssl/sparrowthenerd.space.key

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        <Directory /var/www>
                AllowOverride none

                Order Allow,Deny
                Allow from all
        </Directory>
#</VirtualHost>

When the VirtualHost tags are uncommented, I get the error; when they are commented, I do not, but then I also can't add extra subdomains. I am using the CloudFlare proxy servers with a CloudFlare SSL certificate. Please let me know if you need more information; I'm happy to provide it!

S3 Logs event Issue

Posted: 10 Feb 2022 05:47 AM PST

Is there a way to see what actions the 'g2' IAM user is performing in S3, and which IP(s) they are running from? I have already enabled the logging of S3 actions.

One point I still can't figure out: when I try to find logs in CloudTrail using the AWS access key or the username, in both cases I get "No matches". Yet that user (g2) interacts with S3 throughout the day; based on the times, it seems to be a cron job running on some server. How can I identify it?

I analyzed CloudTrail event history and used CloudWatch Logs Insights to look up the accessing IP address over 90 days, using both the username and the AWS access key, but neither was much help in finding the "g2" user's activity. The "g2" IAM user has Administrator access but no console management access. I suspect it is just doing an 'ls' to check for the existence of some files, and that the same actions occur each day.

I know the date/time the user executes and the resource (S3) but that is all (no bucket, no IP, etc). Is there anything we can do with that information?

Will the CloudTrail CLI tooling help in my scenario? Can anyone help me with this?

Haproxy log file with pfsense

Posted: 10 Feb 2022 06:04 AM PST

I am trying to read the /var/log/haproxy.log file with the command clog -f haproxy.log, but nothing happens; no window opens. How do I see my error log? I'm on pfSense.
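For context, pfSense keeps its logs in a circular (clog) format and the clog reader writes to stdout rather than opening a window, so giving the full path and paging or following the output is the usual pattern (a sketch):

```
clog /var/log/haproxy.log | tail -n 50    # show the last 50 entries
clog -f /var/log/haproxy.log              # follow new entries, like tail -f
```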

Thank you so much!

How to check IPv6 address via command line?

Posted: 10 Feb 2022 06:03 AM PST

How do I check the IPv6 address via command line? For IPv4 I simply use:

curl ipinfo.io/ip

This doesn't work for IPv6.
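Two options, offered as sketches: curl can be forced onto IPv6 against a service that answers over v6 (ipv6.icanhazip.com is one such endpoint), or the locally configured addresses can be listed without any network round trip:

```
# Ask an external service for the public IPv6 address, forcing IPv6:
curl -6 https://ipv6.icanhazip.com

# Or list the machine's own global-scope IPv6 addresses locally:
ip -6 addr show scope global
```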

kubernetes and sharing an nfs volume across multiple pods

Posted: 10 Feb 2022 08:01 AM PST

I'm trying to figure out how I can use a single nfs share with k8s persistent volume claims.

For example, let's say I have a single nfs pv configured:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storage
  nfs:
    path: /var/nfs_exports
    server: 10.9.0.205
    readOnly: false

Is it possible to create multiple volume claims that map to subdirectories within this single share?

For example again, let's say I create the following volume claims:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influx-data
  namespace: kube-system
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---

and:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
  namespace: kube-system
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---

I guess that both claims will be bound to the PV, but then there is no way to separate the data of elasticsearch and influxdb.

I hope you understand what I'm trying to do here (sorry, I find it difficult to explain). I just want a single NFS share that can be used by multiple pods while still keeping their data separate.
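One common pattern, sketched below with assumed names and images: keep the single share and give each workload its own subdirectory via subPath on the volume mount, so the pods never see each other's data even through the same claim:

```yaml
# pod spec fragment (names and image are illustrative)
volumes:
  - name: shared-nfs
    persistentVolumeClaim:
      claimName: influx-data
containers:
  - name: influxdb
    image: influxdb:1.8
    volumeMounts:
      - name: shared-nfs
        mountPath: /var/lib/influxdb
        subPath: influx    # files land under /var/nfs_exports/influx on the server
```

If per-claim isolation with automatic subdirectories is wanted instead, a subdirectory provisioner such as nfs-subdir-external-provisioner does this per PVC.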

Rewrite leads to infinite 301 redirect loop on existing directories

Posted: 10 Feb 2022 08:01 AM PST

I went through the questions and solutions found here and tried numerous approaches (including the [L] flag), but nothing really did the trick.

Situation Overview

Debian running Apache 2.2 proxying through nginx

Goal

Redirect everything to /index.php and ensure a trailing slash, always.

Exclude the following directories from the rule:

  • js_static
  • media

Exclude all .css files from the rule.

The Problem

Apache/nginx produce a 301 redirect loop when I request www.url.com/js_static. (The problem also occurs with a trailing slash; it makes no difference.)

Current Solution Approach

nginx is configured like this:

gzip_proxied any;
rewrite ^/(.*)/$ /$1;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:AES256+EDH';

Apache is configured this way:

RewriteEngine On
RewriteCond %{SCRIPT_FILENAME} !^.+\.(css)
RewriteCond %{REQUEST_URI} !^.+js_static
RewriteCond %{REQUEST_URI} !^.+media
RewriteRule ^(.*)$ /index.php/$1
AllowEncodedSlashes On

I fail to see where the problem is. One theory was that the combination of nginx and Apache rewrites creates the loop, so I fiddled with the configuration, but to no avail, unfortunately.

Can someone pinpoint the issue here?
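One plausible culprit, offered as a guess: nginx strips the trailing slash unconditionally, while Apache's mod_dir (DirectorySlash) 301-redirects real directories back to the slashed form, so /js_static and /js_static/ ping-pong forever. A sketch that exempts the excluded directories from the slash-stripping rewrite on the nginx side:

```nginx
# Strip the trailing slash only for paths that are not real directories
# served by Apache; otherwise mod_dir 301s back to the slashed form and
# the two rewrites loop.
if ($uri !~ ^/(js_static|media)(/|$)) {
    rewrite ^/(.*)/$ /$1;
}
```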

hping3 not returning tcp timestamp

Posted: 10 Feb 2022 07:00 AM PST

A recent pentest revealed that the TCP timestamp option was enabled. I have tried to reproduce the pentesters' result using

hping3 --tcp-timestamp -S -p 80 xx.xx.xx.xx  

but the tool never returns. It sits on the line:

HPING xx.xx.xx.xx (eth0 xx.xx.xx.xx): S set, 40 headers + 0 data bytes  

If I press Ctrl-C, I get:

--- xx.xx.xx.xx hping statistic ---
3746 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

If I add the -c option with a value of, say, 4 it does return but without timestamp information.

I checked with our hosting provider who confirmed that the timestamp was enabled (and then disabled it).

Any ideas what might be wrong with my setup that could be causing this? I'm using Kali 2016.1 on a hyper-v hosted virtual server, tunneling out of our DMZ to a Digital Ocean hosted Debian server using sshuttle.

Task scheduler terminates completed task

Posted: 10 Feb 2022 07:00 AM PST

I am running a task using the Task Scheduler on a Windows Server 2012 R2 server. Today I was examining the task's history. My task completed successfully.

Task Scheduler successfully finished "{a17b1690-5381-4163-a7e5-ab01af11a18e}" instance of the "MyTask" task for user "MyUsername".  

However, I noticed that the event following that task's completion was in the "Task terminated" category.

Task Scheduler terminated "{a17b1690-5381-4163-a7e5-ab01af11a18e}"  instance of the "MyTask"  task.  

I have been trying to figure out what caused this. I do have the following setting checked:

If the running task does not end when requested, force it to stop.  

Could this be why the Task Scheduler terminated the task? I thought the task was done?

What is the performance impact of disabling NCQ?

Posted: 10 Feb 2022 06:03 AM PST

Our cluster system runs currently under CentOS7 with SSDs and NCQ disabled. What kind of a performance drop is to be expected within an i/o-heavy usage scenario?

I'm not expecting a precise answer because I know it largely depends on the application, hardware, and network (just an idea would be great).
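NCQ only matters when requests can queue, so a rough way to bound the impact on a given box is to compare random-read performance at queue depth 1 versus a deep queue with fio; with NCQ disabled the two results converge. The device path and parameters below are illustrative; run only against a scratch device:

```shell
# queue depth 1: NCQ cannot help here, this is the baseline
fio --name=qd1  --filename=/dev/sdX --direct=1 --rw=randread --bs=4k \
    --iodepth=1  --runtime=60 --time_based

# queue depth 32: the gap versus qd1 is roughly what NCQ/queueing buys you
fio --name=qd32 --filename=/dev/sdX --direct=1 --rw=randread --bs=4k \
    --iodepth=32 --runtime=60 --time_based
```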

GPG Key not available for Local Apt Repo

Posted: 10 Feb 2022 06:04 AM PST

We have an apt-mirror server, which also hosts a custom repo named 'local'.

If I add in sources.list the following line :

deb http://aptmirror.example.com/local trusty main  

The following error is displayed :

W: GPG error: http://aptmirror.example.com trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 2ED3267B70B1ADC4

Fetching the key directly also fails:

gpg --keyserver aptmirror.example.com --recv-keys 2ED3267B70B1ADC4
gpgkeys: key 2ED3267B70B1ADC4 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0

I also tried apt-key adv, but it's not working.

Do you know how to make this local GPG public key available to all Linux clients?


Attempts

gpg --send-keys --keyserver keyserver.ubuntu.com $GPGKEY  

or

gpg --send-keys --keyserver keys.gnupg.net $GPGKEY  

but I got:

gpgkeys: this keyserver type only supports key retrieval
gpg: keyserver internal error
gpg: keyserver send failed: keyserver error

And copy-pasting with gpg --export --armor is not really a solution with 200 computers.
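Keyservers aside, a common pattern is to publish the exported key once over the mirror's own web server and have every client fetch it from there; the paths and URL below are illustrative, and the client line is easy to push to all 200 machines via SSH or configuration management:

```
# on the mirror: export the public key next to the repo
gpg --armor --export "$GPGKEY" > /var/www/html/local-repo.asc

# on each client: fetch and trust it
wget -qO - http://aptmirror.example.com/local-repo.asc | apt-key add -
```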


How to save + close file when editing in bash?

Posted: 10 Feb 2022 06:23 AM PST

OK, I am a Linux newbie. I am trying to edit a file from bash via the edit <filename> command in whatever the default editor is (I am assuming vi?).

The problem is that for the life of me I cannot figure out how to save and get out of edit mode. This cheatsheet seems to suggest Esc should do the trick, but it doesn't seem to work.

I am connecting via ssh from a Mac to a SUSE Linux Enterprise 11 box.

Any help appreciated!
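For reference, assuming the editor really is vi: Esc alone only leaves insert mode; saving and quitting is a separate colon command typed afterwards:

```
Esc        leave insert mode (nothing is saved yet)
:wq        write the file and quit (ZZ is a shortcut for the same)
:q!        quit without saving
```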

How do I clear Chrome's SSL cache?

Posted: 10 Feb 2022 06:37 AM PST

I have a HAProxy / stunnel server that handles SSL for our sites on AWS. During testing, I created a self-signed cert on this server and hit it from my desktop using Chrome to test that stunnel was working correctly.

Now I have installed the legitimate cert on that server. When I hit the site from my machine in Chrome it throws the following error:

Error 113 (net::ERR_SSL_VERSION_OR_CIPHER_MISMATCH): Unknown error.

My guess is that Chrome cached the key for the self-signed cert and it doesn't match that of the legitimate cert. This site works in all other browsers on my machine so it's just a Chrome problem.

One interesting note: when hitting the page from an incognito session (Ctrl+Shift+N), it works correctly, so it is clearly some sort of caching issue.

I did all the things I could think of (dumped my cache, deleted certs from the Personal and Other People page in the Manage Certificates dialog, Ctrl+F5, etc.).

My machine is Windows 7 x64. Chrome version: 12.0.742.91.

On the Google Chrome Help Form, there is a description of what sounds like the same issue; however, no resolution is found.


UPDATE: It seems to have "fixed itself" today. I hate problems like this. I still don't know what caused it or how it resolved itself. Presumably the cached cert expired or something, but I am still interested to know where this information is stored and how to verify it.

Best location to keep SSL certificates and private keys on Ubuntu servers?

Posted: 10 Feb 2022 08:14 AM PST

On Ubuntu, it looks like the best place for a private key used to sign a certificate (for use by nginx) is /etc/ssl/private/.

This answer adds that the certificate should go in /etc/ssl/certs/, but that seems like an unsafe place. Do .crt files need to be kept safe, or are they considered public?
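For what it's worth, the certificate is sent to every connecting client during the TLS handshake, so .crt files are public by design; only the key needs protecting. A typical nginx pairing (paths illustrative):

```nginx
ssl_certificate     /etc/ssl/certs/example.com.crt;      # public by design
ssl_certificate_key /etc/ssl/private/example.com.key;    # root-owned, mode 600
```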
