Giant Memory Leaks and CPU overload. Enough is enough!
Added by Sebastian Vassiliou about 15 years ago
Sorry guys if I sound a bit upset right now, but actually I am.
I have been running Redmine since February 2008 and have had problems the whole time, no matter which version of Redmine and no matter whether it runs with Mongrel or Passenger.
After some time the thing consumes 100% CPU and fills all available RAM, eventually killing the machine. I also tried a fresh install, no difference. It appears I am not the only one suffering from such problems, and it surprises me that apparently nothing has been done to fix them.
As said, I have been through several Redmine versions, on Debian 4.0 and then 5.0, with Mongrel and with Passenger, upgrading Rails and everything else along the way with gem update.
It has been happening so often lately that I got fed up and had to shut down the server, bringing my projects to a complete halt for now.
So I have two options: either this problem gets solved somehow, or I need to migrate to some other solution, preferably a non-Rails one such as Trac or Mantis.
I am willing to cooperate to fix this problem once and for all, if you just tell me what you need from me.
Or, if someone has heard of a way to migrate from Redmine to any other bug tracker, I will take anything. Really. I am that desperate :(
Replies (6)
RE: Giant Memory Leaks and CPU overload. Enough is enough! - Added by Felix Schäfer about 15 years ago
Sebastian Vassiliou wrote:
It appears I am not the only one suffering from such problems
Just to weigh in on the positive side: I'm running Passenger+Apache with MySQL on Gentoo, no problems here.
Could you please specify which versions of Ruby and Rails and which DB you are running? What is the typical traffic on the beast, what hardware powers the server, do you have any other Rails app running that doesn't show any signs of resource hogging, and so on? I suppose that in either Mongrel's or Passenger's case you have taken the time to sift through the configs and adapt them to your server?
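For reference, the sort of adaptation I mean looks roughly like this in the Apache config for Passenger (directive names are from the Passenger 2.x series; the values are only an illustration for a small box, not a recommendation):

# keep the worker pool small on a machine with limited RAM
PassengerMaxPoolSize 2
# shut down idle application processes instead of letting them hold memory
PassengerPoolIdleTime 300
# preload the framework so spawned workers can share memory copy-on-write
RailsSpawnMethod smart

For Mongrel the equivalent would be keeping the number of processes low in the cluster config and making sure each one has enough headroom.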
RE: Giant Memory Leaks and CPU overload. Enough is enough! - Added by Sebastian Vassiliou about 15 years ago
Yeah, I assume it works fine for most people... that makes me even more sad :(
OK, the thing is running on a Debian 5.0 64-bit system, using a MySQL database.
Right now, or rather before I shut it down, Redmine was running as a single Mongrel process (to avoid having both CPU cores go up to 100%) proxied through Apache.
There is no other Rails app whatsoever running on that machine. The traffic is not that high either; it's just two projects with very few developers and not much going on. The access log does not show unusually high usage on the day the leak basically exploded.
I followed howtos for both Mongrel and Passenger at the time, but since I get the problem with both, I doubt a non-optimized configuration would lead to high CPU usage and such huge memory leaks...
This is the gem package list:
hannibal:~# gem list

*** LOCAL GEMS ***

actionmailer (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
actionpack (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
activerecord (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
activeresource (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
activesupport (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
builder (2.1.2)
camping (1.5.180, 1.5)
cgi_multipart_eof_fix (2.5.0)
daemons (1.0.10)
fastthread (1.0.7, 1.0.6, 1.0.4, 1.0.1)
gem_plugin (0.2.3)
markaby (0.5)
metaid (1.0)
mongrel (1.1.5)
mongrel_cluster (1.0.5)
mysql (2.8.1, 2.7)
passenger (2.2.5, 2.2.4, 2.2.2, 2.2.1, 2.2.0, 2.1.3, 2.1.2, 2.0.6, 2.0.5, 2.0.3)
rack (1.0.1, 1.0.0, 0.9.1, 0.4.0)
rails (2.3.4, 2.3.3, 2.3.2, 2.2.2, 2.1.2, 2.1.1, 2.1.0)
rake (0.8.7, 0.8.5, 0.8.4, 0.8.3, 0.8.2, 0.8.1)
rubygems-update (1.3.5, 1.3.4, 1.3.3, 1.3.2, 1.3.1)
test-spec (0.10.0, 0.9.0)
and
hannibal:~# ruby --version
ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux]
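Since the access log tells me nothing, I guess the next time I bring it up I should at least log the Mongrel's size over time, with something as crude as this cron entry (it assumes the Mongrel is the only ruby1.8 process on the box and that the log path is writable; adjust as needed):

*/5 * * * * { date; ps -o pid=,rss=,vsz=,pcpu= -C ruby1.8; } >> /var/log/redmine-mem.log

That way I can at least see whether the memory grows steadily or jumps at a particular moment.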
I had huge memory leaks with Redmine, Subversion and Apache in the past... - Added by Arnaud Martel about 15 years ago
- CentOS 5.3
- Redmine with Passenger
- Apache
- Subversion with webdav access only
- cron job scheduled every 10 minutes to process all subversion changesets
With this configuration, I noticed that every cron run consumed additional memory and, after a while, my box started using the swap file, then the CPU reached 100% (when the swap became full...). With 40 projects managed with Redmine, I had to reboot my box every 2 days!!
I spent a lot of time analyzing what happened and finally found a solution:
I installed svnserve and configured each project (for the repository access) to use svn://127.0.0.1/svn/repository instead of http://127.0.0.1/svn/repository
Since this modification, I have never had a memory leak again...
I don't know if it will solve your problem too...
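For completeness, the cron job I mentioned is just the standard Redmine changeset fetch run through script/runner, something along these lines (the install path is whatever your Redmine lives in):

*/10 * * * * cd /opt/redmine && ruby script/runner "Repository.fetch_changesets" -e production > /dev/null 2>&1

In my case the leak only went away once those runs stopped going through http/WebDAV and used svn:// instead.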
RE: Giant Memory Leaks and CPU overload. Enough is enough! - Added by Sebastian Vassiliou about 15 years ago
Hmmm, I switched from http to svnserve for the repos, and I'll see how it goes.
However, until a few weeks ago I was running Redmine with the repositories completely disabled and still had leaks and lockups, though perhaps not as frequently.
RE: Giant Memory Leaks and CPU overload. Enough is enough! - Added by Sebastian Vassiliou about 15 years ago
1100 28708 38.3 25.7 1094360 527092 ? Rl Nov04 328:58 /usr/bin/ruby1.8 /usr/bin/mongrel_rails start -p3000 -e production
AGAIN!!! 100% CPU, 518 MB RAM, no wait, 520 MB as I write this, 1074 MB swap.
WTF!!!!
And yes, I use svnserve now for SVN, and I upgraded to the latest Redmine version, 0.8.6.
Please come up with ANY solution that can import from Redmine.
Thank you.
:(
P.S. 522 MB RAM right now... I will have to kill Redmine before it kills the whole server. Again.
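Until someone figures this out, I will probably just have cron bounce the thing every night as a stopgap, something like this (the app path, port, and the blunt pkill are guesses for my own setup, and it obviously only papers over the leak):

0 4 * * * pkill -f mongrel_rails; sleep 5; cd /var/www/redmine && mongrel_rails start -d -p 3000 -e production

Ugly, but better than the whole server going down.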
RE: Giant Memory Leaks and CPU overload. Enough is enough! - Added by Sebastian Vassiliou about 15 years ago
I disabled the repository module completely, and Redmine worked fine for about 2 days, but right now my whole server has become unresponsive; I look at htop and guess what... 844 MB RAM and 2242 MB swap.
That's it, folks. As much as I like Redmine's features, I just can't afford to live with something that regularly kills my production server anymore.
I will manually migrate my users to some other solution and basically start from scratch, losing years of issue tracking, but hey, I will feel relieved.
Goodbye and good luck, everyone.