Linux!
-
SargeBaldy
- Posts: 366
- Joined: Tue Jul 15, 2003 3:49 pm
- Location: Oregon
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
warlock wrote:Usually people collaborate on free software projects and forking is considered a bad thing.
It is not really true that forking is simply a bad thing. There are quite a few forks of many free software projects, including the kernel. Quite often they are used to implement features that would cause disruptive changes to the original code base.
Also quite often, the more successful forks are partially reintegrated into the main project. Although you wish for this to happen, it is naturally completely up to Randy whether he wishes to do this.
-
NiGHTMARE
- Posts: 3463
- Joined: Sat Jul 19, 2003 8:39 am
-
Graf Zahl
- Lead GZDoom+Raze Developer

- Posts: 49252
- Joined: Sat Jul 19, 2003 10:19 am
- Location: Germany
-
Anonymous
I would assume that Randy doesn't want to have to be constantly uploading his internal changes to some server, or checking that server daily for changes he now has to integrate into the main source. It's his port, let him do what he wants with it.
I can hardly believe those could be significant problems. Coders frequently use version control systems such as CVS, and they tend to make life easier rather than harder. Also, I don't view receiving code patches to my project as a nuisance but as a blessing. Why should anyone object to someone else helping to make it better?
Jim wrote:It is not really true that forking is just a bad thing. There are quite a few forks of many free software projects, including the kernel. Quite often, they are used to implement features that would cause disruptive changes to the original code base.
There are experimental projects that fork the kernel (such as SELinux), the development series, which you could loosely call a fork, and the individual developers' patch series (-ac, -ck, etc.), which I don't really consider forks. The most important thing, though, is that all the useful and proven changes get contributed back to the main source tree, so Linux kernel development stays concentrated. Contrast that with BSD kernel development: FreeBSD, NetBSD, OpenBSD, and even Mac OS X all have their own kernel versions that are developed individually, although they all trace back to the same codebase.
Forks usually happen when there is an irreconcilable design goal or a personal difference between two parties (XEmacs and the imminent XFree86 fork come to mind). If you're developing a new feature, it's true that you should not just slap it, half-finished, into an otherwise stable codebase, but it should hopefully get folded in eventually.
Jim wrote:Also quite often, the more successful forks are partially reintegrated into the main project. Although you wish for this to happen, it is naturally completely up to Randy whether he wishes to do this.
You're both right about that. I am simply voicing my wish on a forum that many other ZDoom players and the main developer read.
ZDoom is a wonderful source port as it is. It easily became my favourite the first time I tried it. I just realized that everything I've been doing with several different source ports could all be done in the best of them. The topic of this thread also relates to this, in that neither ZDaemon nor ZDoomGL has a GNU/Linux version, and right now they're completely inaccessible to me. I just wanted to ask why the current situation is what it is (and maybe improve it, if anyone agrees).
-
HotWax
- Posts: 10002
- Joined: Fri Jul 18, 2003 6:18 pm
- Location: Idaho Falls, ID
The answer is simple... whatever your beliefs or feelings are, you're not Randy Heit. ZDoom is his, and he can do with it what he likes. (Within the license, obviously)
One reason why he might not want to turn ZDoom into a community project is because it's easier to keep track of changes and track down bugs when all of the source code has been done by the person doing the tracking.
ZDoom source code is free to download and do whatever you like with. ZDaemon and ZDoomGL are only two such spin-offs. If you really want a community-driven version, take the code and do it yourself. Then you can have your CVS tree and Randy can have his "pure" ZDoom.
-
Hirogen2
- Posts: 2033
- Joined: Sat Jul 19, 2003 6:15 am
- Operating System Version (Optional): Tumbleweed x64
- Graphics Processor: Intel with Vulkan/Metal Support
- Location: Central Germany
Randy wrote:Installing Linux from scratch can certainly be entertaining (not to mention time-consuming) [...] and use it right away without having to wait for somebody else to create an RPM for it.
You do not always need the latest bleeding-edge software. With the XFree86-4.3.0-111 RPM installed, I am pretty happy... and it's new, too. If you need something compiled specifically for your processor, you can always get the SRPM and recompile it, without any problems even, since SRPMs are tweaked to your distro. They have to compile.
But first I must figure out why I can't change Metacity's theme with the GNOME theme manager...
May I suggest something lightweight? icewm, fluxbox, windowmaker?
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
Hirogen2 wrote:You do not always need the latest bleeding-edge software.
Randy wrote:Installing Linux from scratch can certainly be entertaining (not to mention time-consuming) [...] and use it right away without having to wait for somebody else to create an RPM for it.
Yes, but he wants to be able to get anything as soon as it is released and compile it himself, without messing up the RPM database in the process.
Note that it is quite possible (and not that difficult) to create RPMs yourself. That is what I do when I want to try something really bleeding edge, or simply too obscure to have any pre-made packages. If you are going to be doing that all the time, though, you might as well just use a source-based distribution. (Or, if you are really looking for a challenge, you can build your own distribution from scratch, as Randy did.)
Hirogen2 wrote:May I suggest something lightweight? icewm, fluxbox, windowmaker?
That isn't really that helpful to him. He wants to use GNOME, with Metacity as the window manager, but with a custom theme. Telling him to use a different window manager isn't going to help him any. (I use KDE, but I'm not going to suggest that he drop GNOME just because I think KDE is a little better.)
-
Hirogen2
- Posts: 2033
- Joined: Sat Jul 19, 2003 6:15 am
- Operating System Version (Optional): Tumbleweed x64
- Graphics Processor: Intel with Vulkan/Metal Support
- Location: Central Germany
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
Hirogen2 wrote:KDE is bloated. It loads just as long as Windows. Occasionally slower.
The base of KDE is very well designed and written, but its extensive use of C++ features naturally tends to make things load more slowly. However, thanks to ongoing optimizations (and improvements in the speed of G++'s generated code), KDE 3.1 loads much faster than any previous version and is about as responsive as GNOME.
Naturally, a stripped-down window manager is going to load faster and be more responsive, all other things being equal. You simply cannot fairly compare something like icewm, fluxbox, or windowmaker with GNOME or KDE purely in terms of speed. If you care primarily about speed, or like a window manager with few bells and whistles, they are a good choice.
GNOME and KDE are designed with different goals than those of the various minimalist window managers. And if you won't take my word for it, Linus Torvalds has often pointed to KDE as a well-run open source project that focuses on results. You may not care about his opinion, but I would generally place more weight on it than on that of some random Linux user of whom I know very little.
-
QBasicer
- Posts: 766
- Joined: Tue Sep 16, 2003 3:03 pm
-
Hirogen2
- Posts: 2033
- Joined: Sat Jul 19, 2003 6:15 am
- Operating System Version (Optional): Tumbleweed x64
- Graphics Processor: Intel with Vulkan/Metal Support
- Location: Central Germany
-
Anonymous
HotWax:
I'm not interested in further forking ZDoom; I just said what I would like to see: a better ZDoom. We can speculate to our hearts' content about why this isn't the case, but only Randy can answer the questions you are attempting to answer on his behalf, and I suspect the answers aren't that simple. ZDaemon, for example, may simply never have intended to submit patches to the original ZDoom, for one reason or another. In any case, there's no point in trying to prove that the current situation is better; if it really is, Randy is probably the best person to tell us so.
Now, if I could play Daedalus... That'd really make me happy. I hope the GNU/Linux version of ZDoom will be resurrected soon.
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
Hirogen2 wrote:k, then it must have been KDE 3.0.1 on the other 800MHz machine... But honestly, it loads applications (dcopserver, mcop, kioserv... etc etc etc) I don't seem to use.
DCOP is the framework for interprocess communication in KDE. Besides allowing you to do a lot of neat things, the standard applications depend on these services running. It is more efficient to have a single daemon than to have many processes all performing the same work. Additionally, you can write scripts that use different KDE applications together to accomplish many tasks. KDE 3.1 is significantly faster than the previous version; even though it still takes a little while to load initially, things are pretty snappy afterwards.
You do not seem to understand that having lots of daemons running in the background does not significantly affect performance because these processes do not consume much CPU time when they are idle.
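The claim about idle daemons is easy to check for yourself. Here is a toy sketch in Python (an illustration, not a benchmark; the `idle_daemon` function is an invented stand-in for something like dcopserver sitting around waiting for requests). It compares CPU time consumed against wall-clock time elapsed while a background thread does nothing but sleep:

```python
import threading
import time

def idle_daemon(stop):
    # A "daemon" that mostly sleeps, like an IPC server waiting for requests.
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()
t = threading.Thread(target=idle_daemon, args=(stop,), daemon=True)

cpu_before = time.process_time()  # CPU time used by this process, not wall-clock
t.start()
time.sleep(0.5)                   # half a second of wall-clock time passes
stop.set()
t.join()
cpu_used = time.process_time() - cpu_before

# The sleeping thread burned only a sliver of the elapsed half second of CPU.
assert cpu_used < 0.25
```

The point is simply that a process blocked in sleep (or waiting on a socket) is off the run queue entirely; it costs a little memory, not CPU.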
In fact, kernel version 2.6 (which should be out within a month or so) will improve this even more. Its scheduler is tuned to give interactive applications additional priority: an application that gives up its processor time slice early gains a small priority boost, meaning it will get back onto the processor sooner than an application that always tries to run as long as possible. The idea is that applications that sleep a lot are usually doing lots of I/O, which often indicates heavy interaction with the user. This means that everything will feel faster and more responsive. It is quite an ingenious way to make X, which gets no special treatment from the kernel (unlike the GUI on Windows), more responsive nonetheless.
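To make the idea concrete, here is a toy model in Python. None of this is actual kernel code; `TIMESLICE`, `MAX_BONUS`, and the `Task` class are invented for illustration. It only implements the rule described above: tasks that yield their slice early earn a priority bonus, tasks that burn the whole slice lose it.

```python
# Toy model of a 2.6-style interactivity bonus. Invented names and
# numbers; NOT the real scheduler.

TIMESLICE = 100  # nominal timeslice, in arbitrary ticks
MAX_BONUS = 10   # cap on the priority boost

class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority  # lower number = scheduled sooner
        self.bonus = 0

    def ran_for(self, ticks):
        """Account for one scheduling round: sleepers gain, hogs lose."""
        if ticks < TIMESLICE:
            self.bonus = min(MAX_BONUS, self.bonus + 1)
        else:
            self.bonus = max(0, self.bonus - 1)

    def effective_priority(self):
        return self.base_priority - self.bonus

def pick_next(tasks):
    """Run whichever task currently has the best (lowest) priority."""
    return min(tasks, key=lambda t: t.effective_priority())

# An interactive task that yields early vs. a CPU hog, equal base priority.
editor = Task("editor", base_priority=20)
number_cruncher = Task("number_cruncher", base_priority=20)

for _ in range(5):
    editor.ran_for(10)                  # yields early every round
    number_cruncher.ran_for(TIMESLICE)  # always uses the full slice

# After a few rounds, the interactive task wins the otherwise even contest.
assert pick_next([editor, number_cruncher]) is editor
```

After five rounds the editor has accumulated a bonus while the cruncher has none, so it gets the CPU first whenever both are runnable; that is the whole interactivity trick in miniature.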
-
HotWax
- Posts: 10002
- Joined: Fri Jul 18, 2003 6:18 pm
- Location: Idaho Falls, ID