-
Senior Member
registered user
Rock,
This is interesting. I checked /etc/openmosix.map on the CD and on my hd-install and they are the same. Interestingly, they are both completely commented out. I'm curious what you changed. I was thinking the problem was related to NFS shares.
After I did my hdinstall I got the following error from my client machine when I tried to boot it.
"Can't NFSmount KNOPPIX filesystem, sorry."
I looked in the /etc/init.d/knoppix-terminalopenmosixserver script and saw that the NFS share being exported is /cdrom. I figured this was the problem, so I made some changes to the script, but no luck.
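For context, an NFS export for the live-CD case would look roughly like the line below; on an hd-install there is no /cdrom to export, so the path would have to point at the installed KNOPPIX tree instead. The subnet here is an assumption based on typical Knoppix terminal-server defaults, not taken from the actual script:

```shell
# Illustrative /etc/exports entry (written to /tmp here, not the live file).
# The 192.168.168.0/24 subnet is an assumption from Knoppix terminal-server
# defaults, and the /cdrom path would need changing for an hd-install.
echo '/cdrom 192.168.168.0/255.255.255.0(ro,no_root_squash)' > /tmp/exports.demo
cat /tmp/exports.demo
```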
Could you please post your /etc/openmosix.map file?
Thanks.
Adam
-
Senior Member
registered user
Originally Posted by
aay
After I did my hdinstall I got the following error from my client machine when I tried to boot it.
If you hd-install Knoppix, most hardware detection is removed (the kernel modules will still do their job fine, of course). So you can't really use it for clients, unless they have very similar hardware.
You can, of course, hd-install Knoppix on all machines (they need a hard drive then) and start them as you would otherwise. They will join the cluster via autodiscovery, AFAIK...
You may need to put up some 'server' thingy that will answer the autodiscovery calls, though. Check out the openMosix website for such info; it has good documentation. But of course, this should be handled by the hd-install script in the future.
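The 'server' side of autodiscovery is a userland daemon. A rough sketch, with command names taken from the openMosix userland tools as described in the openMosix HOWTO (verify against your installed version; eth0 is an assumption, and these commands only do anything useful on a machine running an openMosix kernel):

```
omdiscd -i eth0   # start the autodiscovery daemon on the cluster-facing NIC
mosmon            # curses monitor; nodes should appear as they join
```

With omdiscd answering the multicast discovery packets, clients don't need a static /etc/openmosix.map.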
-
Senior Member
registered user
Hardware detection isn't really an issue, I think.
I'm not changing any hardware on the server anytime soon, and it was properly set up at install.
The client will just grab a boot image from the server and then do its own hardware detection (I think).
Adam
-
Senior Member
registered user
So clusterKNOPPIX will automatically migrate processes to use other CPUs' resources. This must mean that we're sharing all resources, such as RAM, over the network, right? OK, I want my SuperComputer now!
Regards,
-
Senior Member
registered user
Originally Posted by
A. Jorge Garcia
This must mean that we're sharing all resources over the network such as RAM, right?
WWJD? JWRTFM!
The openMosix docs state that the shared memory system is still experimental. You can compile it in by uncommenting "#define ALPHA", but I guess this guy didn't do that. BTW, the shared memory system they are using is quite elegant: they assign different latencies (=ping?) to the parts of virtual memory that are outside of the local machine, and the kernel then tries to move processes closest to the lowest-latency memory segment.
-
Junior Member
registered user
Originally Posted by
aay
Rock,
This is interesting. I checked /etc/openmosix.map on the CD and on my hd-install and they are the same. Interestingly, they are both completely commented out. I'm curious what you changed. I was thinking the problem was related to NFS shares.
After I did my hdinstall I got the following error from my client machine when I tried to boot it.
"Can't NFSmount KNOPPIX filesystem, sorry."
I looked in the /etc/init.d/knoppix-terminalopenmosixserver script and saw that the NFS share being exported is /cdrom. I figured this was the problem, so I made some changes to the script, but no luck.
Could you please post your /etc/openmosix.map file?
Thanks.
Adam
I've been looking into that NFSmount problem; it seems to be a Knoppix problem ;p
You'll get the same error when you install a default Knoppix and try to use the terminal server. But when I've got some time, I'll try to fix it in ClusterKnoppix.
The other problem, that only ext2/reiserfs work, seems to lie with openMosix, which doesn't seem to play nice with ext3 and pivot_root.
XFS also isn't supported, because I couldn't get the openMosix patch and the XFS patch merged (also, the openMosix patch I use is a prerelease of the upcoming openMosix 3).
PS: the NFS problem has nothing to do with the commented-out openmosix.map
-
Junior Member
registered user
Originally Posted by
RockMumbles
What I did yesterday was a clusterKnoppix hd-install, and I found out that it would only act as the openMosix terminal server when running from CD. So I went to the openMosix HOWTO, did a bit of reading, and created the proper /etc/openmosix.map file (or /etc/hpc.map; if you have problems, look for /etc/hpc.map and either edit it or get rid of it), and got openMosix to start up on that machine. Then I got the openMosix 2.4.20 kernel image from the clusterKnoppix site and put it on a Morphix hd-install, then also did an apt-get for openmosix (the tools) and openmosixview. I put the same /etc/openmosix.map file on that machine, started openMosix (/etc/init.d/openmosix start), and I had a two-machine cluster up and running. That easy!
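For reference, the map format from the openMosix HOWTO is one line per node: node number, IP address, and the number of nodes in that range. A minimal sketch for a two-machine cluster like the one above (the IPs are placeholders, and the file is written to /tmp here only for illustration; the live file belongs at /etc/openmosix.map):

```shell
# Format per the openMosix HOWTO: <node-number> <IP-address> <range-size>
# IPs below are placeholders; the live file belongs at /etc/openmosix.map.
cat > /tmp/openmosix.map <<'EOF'
1 192.168.1.10 1
2 192.168.1.11 1
EOF
grep -c '^[0-9]' /tmp/openmosix.map   # counts the node entries
```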
The default Debian packages for openMosix/openMosixview are way outdated. I'm using autodiscovery on ClusterKnoppix, which doesn't need an /etc/openmosix.map or /etc/hpc.map file; that's why it was commented out and not used.
-
Senior Member
registered user
Originally Posted by
dolphin
The other problem, that only ext2/reiserfs work, seems to lie with openMosix, which doesn't seem to play nice with ext3 and pivot_root.
Does this explain why the clusterKnoppix CD doesn't boot very well on my machine? Because it mounted an ext3 partition?
-
Junior Member
registered user
Originally Posted by
Henk Poley
Originally Posted by
dolphin
The other problem, that only ext2/reiserfs work, seems to lie with openMosix, which doesn't seem to play nice with ext3 and pivot_root.
Does this explain why the clusterKnoppix CD doesn't boot very well on my machine? Because it mounted an ext3 partition?
No, that should work. It only gives a problem when you install it to your hard disk and use an ext3 partition as your root. That's when pivot_root comes into action (it's executed from linuxrc on the initrd image).
What exactly is your problem with ext3?
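For the curious, the pivot_root step that linuxrc performs can be sketched roughly as follows, following the example in the pivot_root(8) man page. The paths are illustrative, not ClusterKnoppix's actual linuxrc, and this only runs meaningfully from inside an initrd:

```
mount /dev/hda1 /newroot     # the installed root partition (e.g. the ext3 one)
cd /newroot
pivot_root . initrd          # make /newroot the new /; old root ends up in /initrd
exec chroot . /sbin/init <dev/console >dev/console 2>&1
```

It is at the pivot_root call that the openMosix-patched kernel reportedly misbehaves with an ext3 root, which would explain why the live CD (which never pivots onto ext3) is unaffected.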
-
Senior Member
registered user
OK, calm down, just thinking out loud while reading the online dox....
Anyway, I think I get it now. Since we are not running specially designed and compiled apps for supercomputing (à la parallel processing, pipelining, number-crunching type apps), what happens is that when you start an ordinary app, clusterKNOPPIX will find the node with the most resources available to run it.
Well, this could still benefit my new lab at school. We could set up one clusterKNOPPIX server. Then my students each boot up a clusterKNOPPIX CD. As more and more students add nodes to the cluster, we get more and more resources, and overall performance should improve.
Regards,