Saturday, July 31, 2021

Recent Questions - Server Fault

How to delete specific files older than 1hour on cronjob?

Posted: 31 Jul 2021 10:24 PM PDT

I'm using a cron job on Ubuntu 20.04. I want to automatically delete files older than 1 hour, but only files whose names begin with master-stdout.log*.

How can I do this?

find /root/logs/ -name 'master-stdout.log*' -type f -mmin +60 -exec rm {} \;
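A self-contained way to check the pattern-plus-age logic in a scratch directory before pointing it at /root/logs (assumes GNU touch with -d; the /tmp/logdemo path exists only for this demo):

```shell
#!/bin/sh
# Demo: only files matching master-stdout.log* AND older than 60 minutes go away.
rm -rf /tmp/logdemo && mkdir -p /tmp/logdemo
touch -d '2 hours ago' /tmp/logdemo/master-stdout.log.old   # old + matching name -> deleted
touch /tmp/logdemo/master-stdout.log.new                    # fresh + matching name -> kept
touch -d '2 hours ago' /tmp/logdemo/other.log               # old, non-matching name -> kept
find /tmp/logdemo -name 'master-stdout.log*' -type f -mmin +60 -exec rm -f {} +
ls /tmp/logdemo
```

The -name test is what restricts deletion to the master-stdout.log* files; without it, find visits everything under the directory.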

Setting up SSL for custom port in nginx - letsencrypt

Posted: 31 Jul 2021 10:18 PM PDT

I'm trying to enable SSL on a custom port (not 443), running a webpage. From searching around, I couldn't find much info that helped.

The server has unchangeable ports; external: 26143, internal: 80.

To enter the server (without SSL) you would type example.com:26143, and the system would see this as a connection to port 80.

How would I set up a certificate (lets encrypt) to enable SSL on this port?


From testing, it seems like whatever I do, it only accesses the server on port 80, even if I set it to 26143.

here is the nginx sites-enabled config:

server {
    listen 80;
    listen [::]:80;

    root /root/html;
    index index.php;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }

    location /.well-known {
        root /var/www/ssl/example.com/;
    }
}

Commands I've tried are:

certbot --nginx -d example.com:26143
certbot certonly --standalone --preferred-challanges http -d example.com:26143
certbot certonly --standalone --preferred-challenges http -d example.com
certbot certonly --standalone --preferred-challenges http --http-01-port 26143 -d example.com
certbot certonly --nginx --preferred-challenges http --http-01-port 26143 -d example.com
certbot certonly --noninteractive --agree-tos --cert-name slickstack -d example.com -m my@mail.com --webroot -w /root/html
certbot certonly --noninteractive --agree-tos --cert-name slickstack -d example.com:26143 -m my@mail.com --webroot -w /root/html
certbot certonly --noninteractive --agree-tos --cert-name slickstack -d example.com --http-01-port 26143 -m my@mail.com --webroot -w /root/html
certbot certonly --noninteractive --agree-tos --cert-name slickstack -d example.com --preferred-challenges http --http-01-port 26143 -m my@mail.com --webroot -w /root/html

After some tweaking back and forth, the most common error I got was this:

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: example.com
   Type:   unauthorized
   Detail: Invalid response from
   https://example.com/.well-known/acme-challenge/ho73up1dR3KU4V37awccOw2T5xsSILWUM365ZnwVEN4
   [159.81.xxx.xxx]: "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML
   2.0//EN\">\n<html><head>\n<title>404 Not
   Found</title>\n</head><body>\n<h1>Not Found</h1>\n<p"

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address.

The 404 is not from my system; it's from example.com:80 instead of example.com:26143. Also, I do not have access to modify the DNS records.


In my experience, Let's Encrypt and SSL have been kind of confusing, and together with the rate limits, I'm not able to troubleshoot enough to understand what's happening.

I know it should be possible, I just don't know how and/or what I'm doing wrong.

Any help would be appreciated.
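For reference, a server block that terminates TLS on the internal port would look roughly like the sketch below. This is an illustration, not a confirmed fix: it assumes a certificate was already issued for example.com (certbot's -d flag only accepts bare domain names, never host:port pairs, so validation must succeed on the standard ports), after which nginx can serve that certificate on any port it listens on:

```nginx
# Sketch only: TLS on internal port 80, which the outside world reaches
# as https://example.com:26143. The certificate paths assume a cert
# already issued for example.com via certbot.
server {
    listen 80 ssl;
    listen [::]:80 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /root/html;
    index index.php;
}
```

The port in the listen directive and the port clients type are decoupled by the external NAT here, which is why the certificate itself never mentions 26143.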

Using Podman containers with Ansible

Posted: 31 Jul 2021 08:42 PM PDT

I created an Ansible role that uses Podman to pull the nginx image and run the container, which works. I would now like to copy a custom index.html from the host into the container so that it overrides the default index.html page.

Do we use volumes? If so, how do we use them for this scenario in the YAML file?

Appreciate your help.
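One common approach (a sketch, assuming the containers.podman collection is installed; the task name, file path, and port mapping are illustrative) is a bind mount over nginx's default index:

```yaml
# Hypothetical task: bind-mount a custom index.html over the image's default.
# The :Z suffix relabels the file for SELinux hosts; drop it if not needed.
- name: Run nginx with a custom index page
  containers.podman.podman_container:
    name: web
    image: docker.io/library/nginx:latest
    state: started
    ports:
      - "8080:80"
    volumes:
      - "{{ playbook_dir }}/files/index.html:/usr/share/nginx/html/index.html:ro,Z"
```

A bind mount keeps the file on the host, so edits show up without rebuilding anything; baking the file into a custom image is the alternative when you want the container self-contained.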

ASUS router notifying a local device on device connect/disconnect?

Posted: 31 Jul 2021 07:55 PM PDT

I have an Asus RT-AX88U router, which comes with the ability to work with Alexa and IFTTT for things like notifications when it detects that a device has connected/disconnected. However, I don't like the idea of sending this kind of information over the internet, and would rather have a local server that processes these triggers and notifies me by other means. Is there a way to hack the router, or are there unofficial APIs I can use, to get the same functionality without IFTTT/Alexa?

Kubernetes V1.19.13 - kubeapi servers not able connecting to different etcd database

Posted: 31 Jul 2021 07:51 PM PDT

I have upgraded a Kubernetes cluster (3 masters, 3 etcd servers) from 1.18 to v1.19.13, and etcd to 3.4.13. Since then the API servers have not been stable and keep switching between different etcd servers, and because of this the cluster is not working properly. These nodes run CentOS Stream 8. The cluster worked before the upgrade; I have only seen this issue since.

Any help to resolve this issue? Is there a known issue with this version?

API server logs

I0731 00:54:39.498953       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:39.499025       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.02:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:39.499035       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:54:40.241615       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:40.241681       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.01:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:40.241691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:54:45.348969       1 client.go:360] parsed scheme: "passthrough"
I0731 00:54:45.349030       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.03:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:54:45.349040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:55:16.460379       1 client.go:360] parsed scheme: "passthrough"
I0731 00:55:16.460428       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://0.0.0.01:2379  <nil> 0 <nil>}] <nil> <nil>}
I0731 00:55:16.460439       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0731 00:55:17.461906       1 client.go:360] parsed scheme: "passthrough"

etcd looks healthy

# /opt/bin/etcdctl.sh version
etcdctl version: 3.4.13
API version: 3.4

# /opt/bin/etcdctl.sh endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 9.739533ms

# /opt/bin/etcdctl.sh check perf
60 / 60 Boo...oo! 100.00% 1m0s
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.042491s
PASS: Stddev is 0.001743s
PASS

# /opt/bin/etcdctl.sh endpoint status --cluster -w table
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|       ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://0.0.0.02:2379 | 15cd65a732ebd5d8 |  3.4.13 |   26 MB |     false |      false |      9305 |   19813854 |           19813854 |        |
| https://0.0.0.03:2379 | add66a254676e690 |  3.4.13 |   26 MB |      true |      false |      9305 |   19813854 |           19813854 |        |
| https://0.0.0.01:2379 | e2811ed02ce71623 |  3.4.13 |   26 MB |     false |      false |      9305 |   19813854 |           19813854 |        |
+-----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

If you're seeing this Grafana has failed to load its application files

Posted: 31 Jul 2021 07:51 PM PDT

I want to run Grafana behind nginx. I followed these instructions. The only problem I have right now is that whenever I try to edit a panel, I see the error message below. It disappears after a few seconds. The grafana.ini looks like:

domain = localhost
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
serve_from_sub_path = true

The scrubbed nginx.conf looks like

server_name 1.2.3.4;  # my public IP address

location /grafana/ {
    proxy_pass http://localhost:3000/;
}

I don't really experience any issue other than the annoying error message every time I want to edit a panel. When I used SSH to forward port 3000 to my computer (ssh -L 3000:localhost:3000 my_id@1.2.3.4), everything worked just fine. Any suggestions on how to fix the problem?

Grafana error message

How to delete specific files?

Posted: 31 Jul 2021 07:48 PM PDT

I'm running Ubuntu 20.04. I have a directory with millions of files named like this:

master-stdout.log.20210801.024908
master-stdout.log.20210801.025524
master-stdout.log.20210801.064355

How can I delete all of the master-stdout.log files?
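With millions of files, a shell glob (rm master-stdout.log.*) can exceed the argument-list limit, while find's -delete action avoids both the glob and a per-file rm process. A sketch against a throwaway directory (the real path is whatever holds your logs):

```shell
#!/bin/sh
# Demo in /tmp/purgedemo: -delete removes only names matching the pattern.
rm -rf /tmp/purgedemo && mkdir -p /tmp/purgedemo
touch /tmp/purgedemo/master-stdout.log.20210801.024908 \
      /tmp/purgedemo/master-stdout.log.20210801.025524 \
      /tmp/purgedemo/keep.me
find /tmp/purgedemo -maxdepth 1 -name 'master-stdout.log.*' -type f -delete
ls /tmp/purgedemo    # only keep.me remains
```

-maxdepth 1 keeps the sweep out of any subdirectories; drop it if the logs nest deeper.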

After adding a new DNSBL to Sendmail, how can I resubmit an email to see if it will be rejected?

Posted: 31 Jul 2021 04:56 PM PDT

TL;DR

How can I fool my own sendmail into thinking an email comes from a particular ip address, so that it rejects the message due to a DNSBL match?

Details:

I run my own mail server, and most spam is blocked by the DNS black lists (DNSBL) that I've added to /etc/mail/sendmail.mc like so:

dnl FEATURE(`dnsbl',`dnsbl.sorbs.net',`"554 Rejected " $&{client_addr} " found in dnsbl.sorbs.net"')dnl
dnl FEATURE(`dnsbl',`b.barracudacentral.org',`"554 Rejected " $&{client_addr} " found in b.barracudacentral.org"')dnl

Today some spam came in (passing all tests), and after checking MX Toolbox and DNSBL Information I could see that adding one of several DNSBLs would have blocked this particular spam.

So, I added another DNSBL, and now I want to test it by resubmitting this email to Sendmail, but therein lies the problem: it won't be coming from the right IP address, so the DNSBL won't see it as bad.

Here's the command I normally would have used:

formail -s /usr/sbin/sendmail -oi -t < testmail.mbox  

Before I try to reinvent a wheel, I thought I'd ask here first. Possible ideas:

  • Is there a CLI option to sendmail for faking the source IP?
  • Maybe craft a queued message file and put it in the queue directly?
  • Perhaps set up another IP address on my machine, and send to myself with it?
  • Would an OpenVPN or SSH tunnel be a quick fix?
  • Possibly a shared library could be loaded to intercept system calls, à la LibFakeTime?
  • Dtrace looks powerful; can it alter getsockopt(2) calls like this?
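Independent of faking the source address, the DNS lookup sendmail performs for a DNSBL can be reproduced directly: reverse the IP's octets and query the zone. A sketch (the IP below is a documentation address, not the actual spam source):

```shell
#!/bin/sh
# Build the DNSBL query name for an IPv4 address: octets reversed, zone appended.
ip="203.0.113.7"                                   # substitute the spam source IP
reversed=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
query="${reversed}.b.barracudacentral.org"
echo "$query"                                      # -> 7.113.0.203.b.barracudacentral.org
# host "$query"   # an A answer (typically 127.0.0.x) means the IP is listed
```

This confirms the listing sendmail would act on, though it doesn't exercise sendmail's own rejection path.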

Thanks!

How can I keep the Apache server always running on my EC2 instance?

Posted: 31 Jul 2021 04:52 PM PDT

I am learning AWS EC2. I configured Apache and PHP, and started the Apache server with the command:

sudo service httpd start

But every time I stop my PC, or the next day when I want to continue the course, I have to start the Apache server again; it is not always in a started state.

Imagine if I had a website running on that instance: it would mean users couldn't reach my website. And if I want my website to be available all the time, does that mean I can never log out of my AWS account? Can you explain what the problem is?

I am currently using the 12-month free tier on AWS. It's an Amazon Linux 2 instance with a Linux 4.14 kernel.
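As a note on the mechanics (a sketch, not a diagnosis of this particular instance): service httpd start only starts the daemon for the current boot. On Amazon Linux 2, which uses systemd, surviving a reboot takes an explicit enable:

```shell
sudo systemctl enable httpd   # register httpd to start automatically at every boot
sudo systemctl start httpd    # and start it now as well
```

Separately, stopping (rather than just disconnecting from) the EC2 instance shuts the OS down entirely; the website stays up only while the instance itself keeps running, regardless of whether you are logged in to the AWS console.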

Packets from xfrm interface won't route, but opposite works

Posted: 31 Jul 2021 07:05 PM PDT

I'm working on a site-to-site VPN, where one end is a UDM and the other is Strongswan. The goal is to provide bi-directional routing into a cloud environment. I'm completely baffled as to why this isn't working.

The good news is that Strongswan connects and will pass traffic, but I have some routing issues on the Strongswan side. My Strongswan host has two interfaces: eth0, which has the public internet IP, and eth1, with an internal IP of 10.132.169.74.

  • Lan network[s]: 10.87.0.0/24, 10.87.35.0/24, 10.87.235.0/24
  • Cloud network: 10.132.0.0/16
  • 10.87.0.1 = UDM
  • 10.132.169.74 = Strongswan eth1 and connects to the internal cloud network 10.132.0.0/16
  • 10.87.0.33 = test host on the LAN network
  • 10.132.40.82 = test host on the cloud network

current situation:

  • pinging from 10.87.0.33 (Lan test host) -> 10.132.169.74 (Strongswan) works
  • pinging from 10.132.169.74 (Strongswan) -> 10.87.0.33 (Lan test host) works
  • pinging from 10.132.40.82 (cloud test host) -> 10.87.0.33 (Lan test host) works
  • pinging from 10.87.0.33 (Lan test host) -> 10.132.40.82 (cloud test host) does not work, which is the most important case of all

Here's the routing table of the Strongswan host 10.132.169.74:

default via x.x.x.x dev eth0 proto static
10.17.0.0/16 dev eth0 proto kernel scope link src 10.17.0.21
10.19.49.0/24 dev wg0 proto kernel scope link src 10.19.49.1
10.87.0.0/16 dev ipsec0 scope link src 10.132.169.74
10.132.0.0/16 dev eth1 proto kernel scope link src 10.132.169.74
x.x.x.y/20 dev eth0 proto kernel scope link src x.x.x.z

Here's the routing table on the cloud test host (10.132.40.82):

default via x.x.x.x dev eth0 proto static
10.17.0.0/16 dev eth0 proto kernel scope link src 10.17.0.24
10.87.0.0/16 via 10.132.169.74 dev eth1
10.132.0.0/16 dev eth1 proto kernel scope link src 10.132.40.82
x.x.x.y/20 dev eth0 proto kernel scope link src x.x.x.z

On the Strongswan host, I'm executing this:

sudo ip link add ipsec0 type xfrm dev eth0 if_id 4242
sudo ip link set ipsec0 up
sudo ip route add 10.87.0.0/16 dev ipsec0 src 10.132.169.74

And finally here's my swan config:

sudo tee /etc/strongswan.d/charon-systemd.conf << "EOF"
charon-systemd {
  load=pem pkcs1 x509 revocation constraints pubkey openssl random random nonce aes sha1 sha2 hmac pem pkcs1 x509 revocation curve25519 gmp curl kernel-netlink socket-default updown vici
  journal {
    default=0
    # enc=1
    # asn=1
  }
}
EOF

sudo tee /etc/swanctl/conf.d/xyz.conf << "EOF"
connections {
  vpn-cloud-udm-lan {
    version=2
    proposals=aes128gcm16-sha256-modp2048,aes128-sha256-modp2048
    unique=replace
    encap=yes
    local {
      id=x.x.x.x
      auth=psk
    }
    remote {
      auth=psk
    }
    children {
      net {
        local_ts=10.132.0.0/16
        remote_ts=10.87.0.0/16
        esp_proposals=aes128gcm16-sha256-modp2048,aes128-sha256-modp2048
        start_action=trap
        if_id_in=4242
        if_id_out=4242
      }
    }
  }
}
secrets {
  ike-1 {
    id-vpn-cloud=x.x.x.x
    secret="somesecret"
  }
  ike-2 {
    id-udm-lan=y.y.y.y
    secret="somesecret"
  }
}
EOF

and my sysctl on the Strongswan host:

net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0

sudo swanctl --list-sas shows active tunnels, and when I ping I can see the counters go up. Furthermore, a tcpdump listening on the cloud test host shows no traffic arriving, but a tcpdump on the Strongswan host in that scenario DOES show the traffic, so it's being dropped there.
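One kernel setting worth ruling out in this "arrives on the tunnel interface but never gets forwarded" situation (a guess, not something taken from the post): strict reverse-path filtering can silently drop packets that arrive on ipsec0 when the return route to their source points elsewhere.

```shell
# Inspect rp_filter on the Strongswan host; interface names assumed.
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.ipsec0.rp_filter
# If either reports 1 (strict), loose mode is a common workaround to test:
# sudo sysctl -w net.ipv4.conf.all.rp_filter=2
```

Drops caused by rp_filter leave no trace in tcpdump on the outgoing interface, which matches the symptom of traffic visible on the Strongswan host but never reaching eth1.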

Any help is appreciated, thank you!

Rebooted Ubuntu server, nginx site no longer accessible from browser

Posted: 31 Jul 2021 04:03 PM PDT

I rebooted my Ubuntu server this morning because I was having what appeared to be a low-memory error (happens occasionally, hasn't been enough of a problem to try and fix it). But now, my site (which was previously working fine) is no longer accessible from the browser.

The setup: I'm running a NuxtJS site using pm2 to daemonize it, and nginx as a reverse proxy. I have a post-receive git hook so that I can push to my remote git repo, which then rebuilds the app and restarts the pm2 instance.

I can only access my site from inside the server, inside a terminal window. Lynx, wget, and cURL all work, and even follow the 301 redirect to HTTPS. And they're working when I request the domain itself, not just the localhost:3000 that's getting reverse proxied. As in, curl https://my-domain.org works. If I try to curl/lynx/etc from any other terminal window, it just waits until it times out. Same with the browser – waits until it times out.

Here are the things I've tried/looked at:

  • I'm using UFW, so I checked to see if the firewall was the problem. But 80, 443, and 8080 are all set to ALLOW.
  • I tried seeing if maybe nginx wasn't listening somehow, so I tried sudo lsof -i -P -n | grep LISTEN. Here's the output of that:
nginx     2896     root    6u  IPv4 668673557      0t0  TCP *:443 (LISTEN)
nginx     2896     root    7u  IPv4 668673558      0t0  TCP *:80 (LISTEN)
nginx     2897 www-data    6u  IPv4 668673557      0t0  TCP *:443 (LISTEN)
nginx     2897 www-data    7u  IPv4 668673558      0t0  TCP *:80 (LISTEN)
nginx     2898 www-data    6u  IPv4 668673557      0t0  TCP *:443 (LISTEN)
nginx     2898 www-data    7u  IPv4 668673558      0t0  TCP *:80 (LISTEN)
  • I tried checking nginx's access.log. All my curl/wget/Lynx requests are showing up as normal, but none of the browser requests are appearing. I also took a look at the error.log, and got this:
2021/07/31 11:51:52 [emerg] 885#885: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/07/31 11:51:52 [emerg] 885#885: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/07/31 11:51:52 [emerg] 885#885: bind() to 0.0.0.0:443 failed (98: Address already in use)
2021/07/31 11:51:52 [emerg] 885#885: bind() to 0.0.0.0:80 failed (98: Address already in use)
2021/07/31 11:51:52 [emerg] 885#885: still could not bind()

Thus far, I haven't found any solutions. I'm just baffled, because whatever changed, it changed because of a reboot. Any ideas are much appreciated.

EDIT to add some output:

sudo systemctl status nginx:

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2021-07-31 15:05:53 EDT; 27min ago
  Process: 6834 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status
  Process: 6840 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 6837 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Main PID: 6841 (nginx)
   CGroup: /system.slice/nginx.service
           ├─6841 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           ├─6842 nginx: worker process
           └─6843 nginx: worker process

Jul 31 15:05:53 parrot systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 31 15:05:53 parrot systemd[1]: Started A high performance web server and a reverse proxy server.

Output of sudo nginx -T is long, so I made it a gist.
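A hedged diagnostic step (commands assume iproute2's ss is available): the [emerg] bind failures in error.log say something already held ports 80/443 when nginx started, and the lsof output shows a master PID (2896) different from the one systemd reports (6841), which can indicate a second, stale nginx master left over from before the reboot. Identifying the listener and restarting only through systemd usually reconciles this:

```shell
# Which process actually owns ports 80/443 right now?
sudo ss -tlnp '( sport = :80 or sport = :443 )'
# If an orphaned nginx master holds them, stop everything and restart via systemd:
sudo systemctl restart nginx
```

If the port owner turns out not to be nginx at all, that process name from ss is the next thing to investigate.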

Any way to join a server to Active Directory Domain by IP address in hostfile and not DNS?

Posted: 31 Jul 2021 08:01 PM PDT

To my knowledge there is no way to join a server to AD without the server being able to resolve the AD domain via DNS. Joining requires retrieving multiple records from DNS, including SRV records, so a simple hosts-file entry shouldn't work.

With that in mind, my question is: am I correct that there is no other way to join a server to AD without access to a DNS server that hosts the AD records?

The reason I ask is:

I have some servers in AWS that need to join an AD domain inside a corporate network. We have a VPN tunnel from AWS back to the corporate network. This domain is not advertised on a public DNS server we can reach from AWS, but we do have an internal corporate DNS server with the appropriate records. With some networking changes on the corporate side, we could reach this DNS server through our VPN tunnel; however, in AWS we use the AWS DNS service with a delegated zone to resolve server-to-server communication within AWS, and it then reaches out to our corporate public DNS server for anything it can't resolve. We also use the AWS DNS server for health checks on AWS that trigger region failovers.

If we were to point our AWS servers to our internal corporate DNS through the VPN tunnel, we would then no longer be able to resolve internally within AWS.

I only see a couple of options.

  1. Find a way to join a server to AD without using DNS, which I don't think is possible for reasons I stated previously. But if anyone knows differently, please say so.

  2. Expose the AD DNS records on our external (public) DNS.

  3. Redesign our whole DNS design of cloud and corporate environments. This option will take time and maybe it will be the long term solution. But I also need a short term solution in the meantime. Options 1 & 2 are the only short term solutions I can think of and if 1 isn't possible like I think then that leaves me with only option 2.

So, do you agree that option 1 isn't possible, and/or do you have any other ideas that I haven't already listed?

Thanks in advance

Tomcat 9 Service on Centos 7 won't start up at system boot

Posted: 31 Jul 2021 10:03 PM PDT

I'm not very experienced with Linux generally, so please forgive me if this is obvious to you.

I have done a large number of searches on various combinations of keywords, but can't find a solution to this problem.

I have installed command line only (core) CentOS 7 on a virtual machine.

I have installed Java and downloaded Tomcat 9.0.14 (I know that's not the latest version). I have set up Tomcat to run as a service using systemd; that is, I have created a file:

/etc/systemd/system/tomcat.service

I had to revise this post about 15 times before the submission form stopped thinking this was spam, so I had to remove a lot of information I originally wanted to include like what's in my service.tomcat file. Sorry. I'd love to include more detail, but the form just won't let me.

It is the only instance of Tomcat on the Linux server. It is installed in /opt/tomcat/apache-tomcat-9.0.14, but I have created a symbolic link named "latest" so it can be referenced as /opt/tomcat/latest.

Tomcat starts properly when I run the startup script manually, i.e:

cd /opt/tomcat/latest/bin
sudo ./startup.sh

When I do so, it properly responds and I can see the landing page using a browser on another computer.

I can also start up tomcat as a service - manually starting that service using:

sudo systemctl start tomcat

If I do so, then I can see tomcat working and using:

sudo systemctl status tomcat

says that it's working. I can also visit the landing page when tomcat is started this way. So it will start as a service, if I manually start the service with the above command.

The problem is that when I boot up the machine, tomcat doesn't start. I don't think that I've simply failed to reference the service, rather, I believe it might be failing (but I'm not 100% certain):

If I reboot the computer, tomcat doesn't start. If I then go:

sudo systemctl status tomcat -l

I get this:

tomcat.service - Apache Tomcat Web Application Container
   Loaded: loaded (/etc/systemd/system/tomcat.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2020-05-27 14:47:27 AEST; 3min 14s ago
  Process: 1147 ExecStart=/opt/tomcat/latest/bin/startup.sh (code=exited, status=0/SUCCESS)
 Main PID: 1168 (code=exited, status=1/FAILURE)

May 27 14:47:26 CentOS-7-NoGUI-Virgin systemd[1]: Starting Apache Tomcat Web Application Container...
May 27 14:47:27 CentOS-7-NoGUI-Virgin systemd[1]: Started Apache Tomcat Web Application Container.
May 27 14:47:27 CentOS-7-NoGUI-Virgin systemd[1]: tomcat.service: main process exited, code=exited, status=1/FAILURE
May 27 14:47:27 CentOS-7-NoGUI-Virgin systemd[1]: Unit tomcat.service entered failed state.
May 27 14:47:27 CentOS-7-NoGUI-Virgin systemd[1]: tomcat.service failed.

Based on the fact that it says tomcat failed, I don't think that the problem is that I've not told CentOS to start the service. Rather, I think it's trying to start the service, but it's not working.

If I then manually do:

sudo systemctl start tomcat

I get no text returned, but subsequently if I do this:

sudo systemctl status tomcat -l

it shows tomcat is running. I'd love to include the exact output of the command, but I had to remove that too in my many revisions of this to get the submission form not to think this was spam.

I believe I have properly set up the tomcat service to start at system boot time, by doing this:

sudo systemctl daemon-reload

sudo systemctl enable tomcat

I tried using journalctl -xe to see if I could learn why the service was failing to start at boot time, but I couldn't find anything in the results of that command that explained why this was occurring. I'm happy to provide the (very long) output from that, if that's helpful.

The tomcat.service file contains the following: (I had to remove the contents of this file, even though it was marked as code, because the submission form insisted my post looked like spam. Sigh)

I have set the following in my user's home folder, in the .bashrc file:

export CATALINA_HOME=/opt/tomcat/latest

I wonder if perhaps, when the service starts at boot time, it's not running as my user, and so it doesn't have access to this variable somehow?
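That hunch is on the right track: systemd services do not source ~/.bashrc (or any login shell profile), so a CATALINA_HOME set there is invisible to the unit. Since the actual tomcat.service contents were removed from the post, the sketch below is entirely hypothetical, but it shows where such a variable would normally live:

```ini
# Hypothetical /etc/systemd/system/tomcat.service -- illustrative only,
# not the asker's actual file. Environment variables must be declared
# in the unit itself; ~/.bashrc is never read by systemd.
[Unit]
Description=Apache Tomcat Web Application Container
After=network.target

[Service]
Type=forking
Environment=CATALINA_HOME=/opt/tomcat/latest
ExecStart=/opt/tomcat/latest/bin/startup.sh
ExecStop=/opt/tomcat/latest/bin/shutdown.sh

[Install]
WantedBy=multi-user.target
```

This would also explain why manual starts succeed: by the time you run sudo systemctl start tomcat interactively, the network is up and any environment differences matter less than they do during early boot.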

The actual startup script (referenced in the tomcat.service file) is /opt/tomcat/latest/bin/startup.sh. It contains the default contents; I have not modified it in any way.

Again, the above script runs and will start tomcat. I can even start tomcat as a service by manually typing sudo systemctl start tomcat.

It just won't start at boot time.

I have done a sudo yum check-update, followed by sudo yum update, to bring CentOS 7 to the latest version. I did this after I installed Tomcat, as part of the troubleshooting process. It didn't seem to help.

I would be most grateful if anyone can suggest a solution, or a troubleshooting step I should try next. For example, I'm not sure how to examine the startup process on a linux box specifically to look for services failing to start up and why.

Kind regards, Spencer.

How can I install the ngx_http_geoip2_module module on CentOS?

Posted: 31 Jul 2021 04:08 PM PDT

I installed the GeoIP package using yum, which put the GeoIP files in the /usr/share/GeoIP/ folder. I need to add some country-based rules in /etc/nginx/nginx.conf, and to do that I need to load the module (load_module modules/ngx_http_geoip2_module.so;) so that the variables are recognized. See:

geoip_country /usr/share/GeoIP/GeoIP.dat;

So how can I install this module? I followed this tutorial: https://github.com/leev/ngx_http_geoip2_module/blob/master/README.md#installing but the URL is outdated or invalid (I'm not sure); I cannot download it. nginx is already installed. Any suggestions? Thanks in advance!
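The README's general approach still applies even if a download link has moved: build the module as a dynamic module against nginx sources matching the installed version, then load the resulting .so. A sketch, with version numbers and paths as assumptions (note also that ngx_http_geoip2_module reads GeoIP2 .mmdb databases via libmaxminddb, not the legacy GeoIP.dat files, so the geoip_country directive belongs to the older geoip module):

```shell
# Versions and paths are illustrative; the nginx source version MUST match
# the installed nginx (check with: nginx -v) or the module will not load.
sudo yum install -y gcc make pcre-devel zlib-devel libmaxminddb-devel git
git clone https://github.com/leev/ngx_http_geoip2_module.git
curl -O http://nginx.org/download/nginx-1.20.1.tar.gz
tar xzf nginx-1.20.1.tar.gz
cd nginx-1.20.1
./configure --with-compat --add-dynamic-module=../ngx_http_geoip2_module
make modules
sudo cp objs/ngx_http_geoip2_module.so /usr/share/nginx/modules/
```

After that, the load_module line from the question should resolve, with geoip2 blocks pointing at an .mmdb database rather than GeoIP.dat.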

Unable to NAT TFTP traffic because iptables is not forwarding the return connection to the client despite TFTP helper creating an expectation

Posted: 31 Jul 2021 09:04 PM PDT

The Problem

I have a TFTP server (Machine 'S') and a TFTP client (Machine 'C') on different subnets. They are connected via a router( Machine 'R'). All 3 machines are Debian 9/Stretch. The router is running iptables and is set to masquerade connections from the client's network to the server's network. I have configured iptables to use the Netfilter TFTP helper for tftp connections going to the TFTP server.

The trouble I'm having is that the TFTP helper sets up an expectation for the return tftp connection (as expected) but, despite this, only traffic from port 69 on the TFTP server is getting translated and sent back to the client. So only the regular MASQUERADE connection tracking is being used even though the conntrack table shows the expected return connection. According to RFC1350, the server is supposed to choose a random source port for its communication and direct it to the port that the client used as a source port originally (whew...).

The result is that the router NATs the connection from the client to the server, sets up a translation rule for the return connection, and happily waits for a return packet from the server with source port 69 that never arrives.

The Setup

Addresses are made up for clarity:

  • TFTP Server(S): 1.1.1.1

  • TFTP Client(C): 2.2.2.1

  • Router(R): 1.1.1.2 / 2.2.2.2

Iptables on the router has the following rules. All tables have default ACCEPT policy:

======== RAW Table ========
Chain PREROUTING (policy ACCEPT 464K packets, 432M bytes)
 pkts bytes target     prot opt in     out     source       destination
   59  2504 CT         udp  --  *      *       0.0.0.0/0    0.0.0.0/0       udp dpt:69 CT helper tftp

Chain OUTPUT (policy ACCEPT 280K packets, 36M bytes)
 pkts bytes target     prot opt in     out     source       destination

======== NAT Table ========
Chain POSTROUTING (policy ACCEPT 398 packets, 40794 bytes)
 pkts bytes target     prot opt in     out     source       destination
 5678  349K MASQUERADE  all  --  *     enp1s0  0.0.0.0/0    0.0.0.0/0

Once the TFTP client is trying to connect, conntrack -L shows the following:

udp      17 28 src=2.2.2.1 dst=1.1.1.1 sport=45084 dport=69 [UNREPLIED] src=1.1.1.1 dst=1.1.1.2 sport=69 dport=45084 mark=0 helper=tftp use=1  

conntrack -L EXPECT:

298 proto=17 src=1.1.1.1 dst=1.1.1.2 sport=0 dport=45084 mask-src=255.255.255.255 mask-dst=255.255.255.255 sport=0 dport=65535 master-src=2.2.2.1 master-dst=1.1.1.1 sport=45084 dport=69 class=0 helper=tftp  

As you can see, the TFTP helper rule is working properly and is triggered once the client attempts its connection. As you can also see, the expectation created in the EXPECT table has source port 0, which I assume means "any port". But, as you'll see, the connection is only routed back to the client if the source port from the server is port 69 (regular old NAT)! Why is this? This is not the correct behaviour as far as I can tell.

I won't clutter this post anymore if I can avoid it, but what's shown by tcpdump udp and host 1.1.1.1 confirms exactly what iptables and conntrack are showing me.

I did this same setup on several Debian 8/Jessie machines about a year ago, and the TFTP helper worked as expected; I never had any issues. Can anyone help me figure out if I'm doing something wrong? Is the issue with the TFTP helper? Why would its behaviour have changed since Debian 8/Jessie?
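One change between those Debian releases worth ruling out (an educated guess, not a confirmed diagnosis): the conntrack helper (nf_conntrack_tftp) only creates the expectation, while the NAT translation for that expected flow lives in a separate companion module, and kernels since 4.7 (Debian 9's era) stopped loading and assigning helpers as automatically as before.

```shell
# Check that the NAT companion of the TFTP helper is actually loaded on
# the router; the expectation can exist without it ever being translated.
sudo modprobe nf_nat_tftp
lsmod | grep tftp    # expect both nf_conntrack_tftp and nf_nat_tftp listed
```

If nf_nat_tftp was missing, this would explain exactly the observed symptom: a correct EXPECT entry, but only the plain port-69 MASQUERADE mapping ever applied.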

Migrating Jenkins jobs from one server to another

Posted: 31 Jul 2021 04:08 PM PDT

I copied the Jenkins "jobs" directory from server A (VirtualBox) to server B (AWS). The jobs directory shows up on server B with all the files in it, but those jobs don't populate in the Jenkins UI. Please help.

Thank you!

SSH Suddenly returning Invalid format

Posted: 31 Jul 2021 05:07 PM PDT

So a while ago I set up a server on AWS and used their generated SSH key. I saved the key to LastPass, and I have successfully retrieved it from there and gotten it working before. However, after trying that again today, I can't get it to work.

-rw------- 1 itsgreg users 1674 Jun 6 12:51 key_name

I've tried ssh -i key_name and ssh-keygen -f key_name, but nothing works; I always get this error message:

Load key "key_name": invalid format

Is there any way to fix this?

Adding Tag (i.e. Source IP) to rsyslog for sending to rsyslog remote server

Posted: 31 Jul 2021 07:01 PM PDT

Is there any way to add a tag to the logs that rsyslog sends? I send these logs to another server, and I can detect the source IP at the destination, but I need to add the tag at the source.

Nginx configuration for Deluge

Posted: 31 Jul 2021 08:01 PM PDT

I have Nginx running on a CentOS server where I installed Deluge and configured a server block for it. In my browser, mydomain.com goes to the Deluge web UI, but www.mydomain.com goes to a web page of the hosting provider. In my DNS, I have entries pointing both "www" and "mydomain" to the server IP.

Here's the Deluge server block in /etc/nginx/conf.d/vhosts.conf:

server {
        listen 80;
        server_name mydomain.com www.mydomain.com;

        location / {
                proxy_pass http://www.localhost:8112;
                proxy_set_header X-Deluge-Base   "/";
        }
}

Do you have any idea? :)
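Two things stand out in the block above, hedged as guesses: "www.localhost" is not a resolvable name (plain 127.0.0.1 is where Deluge's web UI listens), and if www.mydomain.com lands on the hoster's page, the "www" DNS record may not actually resolve to this server yet (worth re-checking with dig www.mydomain.com). A corrected sketch of the block:

```
server {
        listen 80;
        server_name mydomain.com www.mydomain.com;

        location / {
                # Deluge's web UI default port; "www.localhost" would not resolve.
                proxy_pass http://127.0.0.1:8112;
                proxy_set_header X-Deluge-Base "/";
                proxy_set_header Host $host;
        }
}
```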

How to show a 404 instead of a 403 on apache host

Posted: 31 Jul 2021 05:09 PM PDT

My site was hacked about 2 months ago, so I closed the site down, but there are over 1,000 spam links out there still pointing to the domain. As there are no files on the domain, visitors, including Google, receive a 403 error and so assume the page exists. How can I change the 403 to a 404? I have a 404.html file and have tried all the different rewrite and ErrorDocument variations for the .htaccess file that I've been able to find on this site and others, and nothing seems to work. For example:

Options -Indexes
RewriteCond %{HTTP_HOST} ^(www\.)?fineartdecor.com [NC]
RewriteRule ^(.*)/$ - [R=404,NC]

------------------------------

ErrorDocument 404 /index.html?status=404

-------------------------------

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ / [L,QSA]

--------------------------------

Further suggestions would be gratefully received.
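One likely reason the RewriteRule attempt above never fires is its pattern ^(.*)/$, which only matches URLs ending in a slash. A simpler sketch that 404s everything via mod_alias, excluding the error page itself so it can still be served (the lookahead relies on Apache's PCRE regexes, present in any recent httpd):

```
# Every path except /404.html answers with 404.
RedirectMatch 404 ^/(?!404\.html$).*
ErrorDocument 404 /404.html
```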

dmidecode weird total/data width

Posted: 31 Jul 2021 07:01 PM PDT

I am getting strange outputs from my workstation, which has ECC RAM.

Supposedly, from what I read, the data width should be 64 bits and the total width 72 (64 data + 8 ECC). But data width shows as 64 and total width as 128.

Is this a problem with my configuration?

For reference, my motherboard is a MSI C236A WORKSTATION.

Handle 0x0042, DMI type 17, 40 bytes
Memory Device
    Array Handle: 0x0041
    Error Information Handle: Not Provided
    Total Width: 128 bits
    Data Width: 64 bits
    Size: 8192 MB
    Form Factor: DIMM
    Set: None
    Locator: ChannelA-DIMM0
    Bank Locator: BANK 0
    Type: DDR4
    Type Detail: Synchronous
    Speed: 2133 MHz
    Manufacturer: Micron
    Serial Number: 18221400
    Asset Tag: 9876543210
    Part Number: 18ASF1G72AZ-2G1B1
    Rank: 2
    Configured Clock Speed: 2133 MHz
    Minimum Voltage: Unknown
    Maximum Voltage: Unknown
    Configured Voltage: 1.2 V

Thanks,

Eduardo

nginx 405's with try_files for a DELETE request instead of proxying

Posted: 31 Jul 2021 05:09 PM PDT

I have nginx proxying to php-fpm with the following config:

location / {
  try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
  fastcgi_pass   127.0.0.1:9000;
  fastcgi_index  index.php;
  fastcgi_param  SCRIPT_FILENAME /vol/app/www/$fastcgi_script_name;
  include        fastcgi_params;
}


Everything is working great until a DELETE request comes in like:

DELETE /?file&path=foo

When this happens nginx returns a 405 (Method Not Allowed) and doesn't appear to proxy the request to php-fpm. What's the best way to get DELETE/PUT requests to proxy? Is there a way to bypass try_files for this type of request?

When hitting this URL, I see nothing in the error.log but access.log shows:

68.50.105.169 - - [20/Mar/2016:17:48:57 +0000] "DELETE /?file=client_img1.png&fileupload=e35485990e HTTP/1.1" 405 574 "http://ec2-foo.compute.amazonaws.com/jobs/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36" "-"

I've confirmed that the request is not reaching the proxy. My assumption is that nginx is blocking DELETE on the first "try" of try_files.
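That assumption matches nginx's behaviour: the 405 is consistent with the request being handled by the static/index modules (DELETE / matches the $uri/ case since the root directory exists), and those modules only implement GET, HEAD, and POST, so they answer before the /index.php fallback is considered. One sketch of a workaround, assuming index.php is the intended front controller as in the config above, is to short-circuit those methods to it directly:

```
location / {
    # Send methods the static module would 405 straight to the
    # front controller; query args survive the rewrite.
    if ($request_method ~ ^(PUT|DELETE|PATCH)$) {
        rewrite ^ /index.php last;
    }
    try_files $uri $uri/ /index.php?$args;
}
```

The rewrite...last inside if is one of the patterns that is safe despite the usual "if is evil" caveats, since it simply restarts location matching at /index.php.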

Redirect IP to a domain name using htaccess

Posted: 31 Jul 2021 03:03 PM PDT

Let's say I have the IP address 11.12.13.14 and the domain example.com. What I want is to serve the domain's content when the user visits the IP address, without changing what's in the address bar. So when the user requests 11.12.13.14/test it should serve example.com/test but not redirect: the address bar should still show 11.12.13.14/test.

I have seen this question Redirect to other domain but keep typed domain. I don't know if it works because I haven't tested it, but I suppose it does.

I am using Ubuntu 14.04 with Apache, so is there any way to achieve this?

Here is what I have tried

Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /

RewriteCond %{HTTP_HOST} ^ 11.12.13.14$ [NC]
RewriteRule ^ http://www.example.com%{REQUEST_URI} [L,NE,P]
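As a guess at why the snippet above never matches: there is a stray space between ^ and the IP in the RewriteCond, so the pattern can never match a real Host header (the unescaped dots are merely sloppy, since . matches any character). A corrected sketch; note the [P] flag additionally requires mod_proxy and mod_proxy_http to be loaded:

```
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /

# No space after ^, dots escaped so only the literal IP matches:
RewriteCond %{HTTP_HOST} ^11\.12\.13\.14$
RewriteRule ^ http://www.example.com%{REQUEST_URI} [P,L,NE]
```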

"A new version of /boot/grub/menu.lst is available" when upgrading Ubuntu on an AWS server

Posted: 31 Jul 2021 04:25 PM PDT

I just tried to do a sudo do_release_upgrade on an AWS EC2 Ubuntu 13.10 server to upgrade to 14.04. All was going well until I got the following message:

A new version of /boot/grub/menu.lst is available, but the version installed
currently has been locally modified.

What would you like to do about menu.lst?

   * install the package maintainer's version
   * keep the local version currently installed
   * show the differences between the versions
   * show a side-by-side difference between the versions
   * show a 3-way difference between available versions
   * do a 3-way merge between available versions (experimental)
   * start a new shell to examine the situation

<Ok>

I certainly haven't modified menu.lst, so I assume the local modifications are Amazon's doing. I'm going to hit the "keep the local version currently installed" option and hope for the best.

But why am I getting this message, and is this the correct way to handle it?

RAID 1 cannot sync with new drive; it stops at 30%

Posted: 31 Jul 2021 03:03 PM PDT

I am trying to add a new HDD in place of a faulty HDD, but the new HDD cannot sync with the old one. The sync process runs up to 30%, and after that it stops.

cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[2](S)
      1458319504 blocks super 1.0 [2/1] [U_]

md1 : active raid1 sda2[3] sdb2[2]
      524276 blocks super 1.0 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[2]
      6291444 blocks super 1.0 [2/2] [UU]

md0 and md1 synced successfully, but md2 does not.

Here are the details:

mdadm --detail /dev/md2
/dev/md2:
        Version : 1.0
  Creation Time : Fri May 24 11:22:21 2013
     Raid Level : raid1
     Array Size : 1458319504 (1390.76 GiB 1493.32 GB)
  Used Dev Size : 1458319504 (1390.76 GiB 1493.32 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Aug  4 22:08:23 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : rescue:2  (local to host rescue)
           UUID : 96b46a6c:f520938c:f94879df:27851e8a
         Events : 616

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed

       2       8       19        -      spare   /dev/sdb3

Is there any solution? I want to back up my data.

Bitnami redmine error SVN

Posted: 31 Jul 2021 09:04 PM PDT

I'm installing the Bitnami Redmine stack (Redmine + Subversion). First I installed, configured, and tested it locally (Ubuntu 14.04 LTS), and everything was OK.

Then I installed the Bitnami stack on the server (Red Hat 4.4.7-4) and configured SVN. I committed files into SVN and connected the project to the SVN repository in Redmine, but when I try to view it, Redmine displays a 404 error. In the Redmine log file I see the following errors:

Started GET "/redmine/projects/web-user-panel/repository" for 127.0.0.1 at 2014-04-24 11:34:20 +0300  Processing by RepositoriesController#show as HTML    Parameters: {"id"=>"web-user-panel"}    Current user: user (id=13)  Error parsing svn output: #<REXML::ParseException: No close tag for /lists/list>  /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:28:in `parse'  /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:245:in `build'  /var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `new'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `parse'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini.rb:80:in `parse'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:313:in `parse_xml'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:106:in `block in entries'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `call'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `block in shellout'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `popen'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `shellout'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:212:in `shellout'  /var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:100:in `entries'  /var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:198:in `scm_entries'  /var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:203:in 
`entries'  /var/www/html/redmine/apps/redmine/htdocs/app/controllers/repositories_controller.rb:116:in `show'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/implicit_render.rb:4:in `send_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:167:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rendering.rb:10:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:18:in `block in process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:491:in `_run__2883861927089110970__process_action__2542827355008294621__callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:17:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rescue.rb:29:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:30:in `block in process_action'  
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `block in instrument'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications/instrumenter.rb:20:in `instrument'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `instrument'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:29:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/params_wrapper.rb:207:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/railties/controller_runtime.rb:18:in `process_action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:121:in `process'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/rendering.rb:45:in `process'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:203:in `dispatch'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rack_delegation.rb:14:in `dispatch'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:246:in `block in action'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `dispatch'  
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:36:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:68:in `block in call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `each'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:608:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-openid-1.3.1/lib/rack/openid.rb:98:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/best_standards_support.rb:17:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/etag.rb:23:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/conditionalget.rb:25:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/head.rb:14:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/params_parser.rb:21:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/flash.rb:242:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:210:in `context'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:205:in `call'  
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/cookies.rb:341:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/query_cache.rb:64:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `_run__1805290955544829105__call__1486932417638469082__callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_call_callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:27:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/remote_ip.rb:31:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:32:in 
`call_app'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `block in call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/tagged_logging.rb:22:in `tagged'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/request_id.rb:22:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/methodoverride.rb:21:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/runtime.rb:17:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/cache/strategy/local_cache.rb:72:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/lock.rb:15:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/static.rb:63:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:136:in `forward'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:245:in `fetch'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:185:in `lookup'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:66:in `call!'  
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:51:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/engine.rb:484:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/application.rb:231:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/railtie/configurable.rb:30:in `method_missing'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/builder.rb:134:in `call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:64:in `block in call'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `each'  /var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `call'  /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/rack/thread_handler_extension.rb:74:in `process_request'  /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:141:in `accept_and_process_next_request'  /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:109:in `main_loop'  /var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler.rb:448:in `block (3 levels) in start_threads'  ...  No close tag for /lists/list  Line: 4  Position: 93  Last 80 unconsumed characters:    Output was:   <?xml version="1.0" encoding="UTF-8"?>  <lists>  <list     path="svn://127.0.0.1/voxysuser">      Rendered common/error.html.erb within layouts/base (0.1ms)  Completed 404 Not Found in 69.1ms (Views: 15.1ms | ActiveRecord: 3.0ms)  

How can I resolve this problem? I googled it, but the similar problems I found were supposedly fixed 3 years ago.

I'm installing the latest Bitnami Redmine 2.5.1-1 stack.

UPDATE: Well, I found another way. If I use the http protocol it works fine, but I have to remove web access to the SVN repository. That's why I created a virtual host on localhost and fetch the SVN info over the 127.0.0.1 IP:

<VirtualHost 127.0.0.1:8000>
    <Location /repo>
        DAV svn
        SVNPath "PATH_TO_MY_REPOSITORY"
    </Location>
</VirtualHost>

And this works well.

Zimbra error when installing

Posted: 31 Jul 2021 10:03 PM PDT

Can you guys help? I have set up an Ubuntu Server 12.04 machine for Zimbra. I downloaded ZCS 8.0.2 and put it in /opt/zimbra, but when I run install.sh, even though the prerequisites ARE found and I agree to the license and select packages to install, the folder is removed and I get the following error:

./install.sh line 339: /opt/zimbra/libexec/zmsetup.pl: No such file or directory

And when I go back to look for that .pl file I don't find anything; everything's wiped out!

Any thoughts? Thanks

Permanently assigning IP address for an embedded device

Posted: 31 Jul 2021 09:05 PM PDT

This is a follow-up to Embedded device configured with bad IP address, can I still connect?

We make embedded devices that run Linux. Users can change the networking configuration of the device (static IP, DHCP client and server). Zeroconf was supposed to be the fallback for when a user forgets that a static IP was assigned, but Zeroconf seems spotty in implementation. Connecting a Windows client frequently results in the client getting a link-local address that cannot communicate with the device.

There is no hardware reset button, sadly. I know what MAC address each device has, but I don't know how to use that information because the device's networking stack rejects data unless I know its IP address.

Would it be bad to statically assign a secondary IP address in the link-local range (169.254.0.0/16) to eth0:0? That way I can write a restore utility that will work when the device is directly connected to a client. (No routers involved, but possibly a switch)

What happens if two of our devices are on the same network with the same link-local IP address? They will have different primary IP addresses.

Some similar products hard-code a private IP (e.g. 192.168.1.2) for this particular situation.

E: Sub-process /usr/bin/dpkg returned an error code (100)

Posted: 31 Jul 2021 05:41 PM PDT

I am running Debian 5.0-i386-default on Xen. I hadn't touched my VPS in 2 months; then last night I ran the following command:

myserver:/usr/bin# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
  makepasswd
The following packages will be upgraded:
  libc6 libc6-dev libc6-xen libmysqlclient15off locales mysql-client
  mysql-client-5.0 mysql-common mysql-server mysql-server-5.0
10 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 0B/50.1MB of archives.
After this operation, 483kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Preconfiguring packages ...
E: Sub-process /usr/bin/dpkg returned an error code (100)

I googled and it seems to be a permission thing for "dpkg". However, when I cd into /usr/bin, there's no dpkg binary!!! Please help, thanks.
