I've been following the discussions about license proliferation that have been going on for the past few weeks.
I'm really glad these issues are being raised now, because they are closely related to the subject of my master's dissertation and will surely enrich my research.
But one thought crossed my mind today: everyone participating in that discussion has some idea of which licenses should be recommended. Yet if they simply said "OK, here are the licenses", people would complain that they just made the list up. So what they are doing instead is a methodology for choosing licenses, and that, apparently, they can just make up, expecting far fewer complaints.
2008-10-03
2008-09-19
2008-09-03
World Day Against Software Patents
September 24th will be the World Day Against Software Patents.
Although the day celebrates an achievement made at the European Parliament five years ago, there is still a lot to be done. Visit http://stopsoftwarepatents.org/ to learn what else can be accomplished and how you can help.
2008-08-22
Poetic License
Found this license today. It is cute.
(c) 2005 Alexander E Genaud
This work ‘as-is’ we provide.
No warranty express or implied.
We’ve done our best,
to debug and test.
Liability for damages denied.
Permission is granted hereby,
to copy, share, and modify.
Use as is fit,
free or for profit.
These rights, on this notice, rely.
2008-08-17
Sharing the responsibilities
Just a rant: when you aren't going to do things right, let other people do it.
2008-08-06
Wikia Evolution
Today the Wikia Search team announced something that dramatically changed how easy it is to help build a better search engine. It is a new Firefox toolbar that lets you add sites and metadata about them to the Wikia Search index in a few clicks, without ever leaving the page. You can also add search results from Google or Yahoo directly to Wikia Search.
Now, if only they could figure out a way to better handle the diversity of languages spoken on the web, that would be great. It bothers me a little how much German gets in my way when doing Wikia Search stuff, and I'd also like to add more Portuguese content, so this area has lots of room for improvement. As far as I know, Google is the only search engine that tries to address language issues in search results, but I don't like how they do it either, because you can't choose multiple languages when searching.
I'd say let's wait and see, but actively shaping it is much more fun than waiting ;)
2008-07-07
Blogging
Ever since I read these slides by Jyri Engeström at an OpenSocial meeting, I've been wondering whether writing shorter, more frequent posts would be the right way to go.
At the moment I write at two blogs and one microblog. But I also have five inactive blogs.
Here I try to discuss ideas that some people (including me) are really interested in thinking about a little further, to understand what is going on in the environment around us and how it makes a difference in our lives, so that maybe we can shape the future in our best interest. The posts usually have a minimum length so that the ideas can be expressed with a beginning, a middle and an end. At my other blog I write short news on a more general and popular theme: geek stuff. If I count the number of words written at each blog, they are more or less the same, although at my other blog they are spread across a larger number of posts.
Now, let's look at some numbers from Google Analytics. The blog with shorter posts gets four times the number of visitors I have here, but the average time on site is exactly the same. That almost makes me think I should focus on writing shorter posts more frequently. On the other hand, the feedback I get from the readers of this blog is much more insightful. So I'd be trading quality for quantity. And I don't want to do that.
However, I can't say there is a linear relationship between the length of my posts and the value they add to my own life. When it comes to microblogging, if I interact with the right community of people I can extract useful information without spending too much time writing or filtering a flood of posts.
Of course, considering that the contents of my blogs are very different from each other and that my audience is very limited, I can't really generalize my findings. But one thing every blogger should keep in mind is who the target audience is, what they expect from you, and what you expect from them.
2008-06-18
Firefox Viral Marketing
Looks like the viral marketing has been working well for Firefox 3. You can keep track of the progression at the download counter page.
It would be interesting if they released a graph of downloads per hour.
Wikia Search Again
Two weeks after my last rant, I just want to say that I feel things are working better in the project now. People are able to discuss ideas and there is better communication about what is going on.
2008-06-03
Wikia Search New Launch
Today a message was sent by Jimmy Wales to the Wikia Search mailing list:
Just pinging this list to make sure everyone here is seeing the new
launch today... lots of new features, new and much better index, etc.
Getting favorable reviews so far...
My first question is how the other people who wrote reviews already knew about it. Did they find out by themselves as soon as they got up in the morning, or were they told about it? And if the latter, who told them? Anyway, the real question is: how come the list of people who are really interested in the project never knows what is going on in it?!
Is Jimmy Wales following Jason Calacanis's strategy, thinking that the main advantage of having a community is having it do viral marketing of your product? But hey, the people subscribed to the Wikia Search list knew that the new Mahalo features were being launched before they were publicly announced! So it looks like Calacanis is doing a much better job at that. To be fair, Jeremie Miller told us they planned to launch the new features sometime (almost one month ago), when John McCormac asked if there were any updates. Now compare that with the almost daily updates I received when I was on the Mahalo list.
So, basically, this is how it works: if you are a developer, maybe you can find your way through the code (after you find the code itself, which isn't exactly a straightforward task), and then you can try to contribute. From the activity I see on the dev lists, they don't talk much to each other, but it is enough to get things done. However, there is one not-so-little problem with this approach: although they say the project is open source, there is no license! And if you aren't a developer, your contribution is limited to promoting the site and using it.
Now, about using the site. They really do have cool new features! The interface improved a lot and you have much better control over the search results. You can not only give a rating to each link, but also edit, delete, add, etc. I think this is how it should have been from day one. There is even a feature that lets you change the background of a search result to add a relevant image. On the other hand, the index doesn't seem to be that much better. It's still missing lots of sites, and sometimes the same site appears more than once. So I'm not sure it is really ready to be used at large scale yet, which is important for getting the critical mass needed to make collaborative production work.
Wikia Search started with four organizational principles: Transparency, Community, Privacy, and Quality. But the real work is done by a small group, cathedral style, ignoring feedback that comes from the outside. So far the search quality isn't high, and even privacy isn't that well respected, considering anyone can see the IP address of the people who edited anything in the search results.
But it is slowly getting better. Let's see what comes next. Maybe they will even figure out how people can actually play a larger role in it.
2008-05-27
Wiki Books
Alexander Pope once wrote:
"Forever reading, never to be read"
That is what is happening here. Lately I've been too busy reading, so I haven't had time to write on this blog.
But I'd like to share with you some books written collaboratively that are worth reading:
- The Wealth of Networks
- Code 2.0
- Wikinomics Playbook
2008-05-08
Wikia Search Hype Cycle
The graphic above is known as the Gartner Hype Cycle. Wikia Search had its Peak of Inflated Expectations on January 7th, and this is where I think it is now.
Are you going to help take it to the next level?
2008-05-07
Meta-info
I'll use this post to explain the title: iridescence is an optical phenomenon in which the color seen on a surface changes according to the angle of view. This means that when more than one person looks at the same thing from different perspectives, they can see different colors. This is how the stuff I plan to write about here works: different people see each project from their own point of view and contribute to it in a holistic way, creating a whole that is more than the sum of its parts. The name Iridescence is also based on Iris, a character from Greek mythology, a messenger who travelled around bringing news to the mortals, and as such can represent the communication power of the Internet.
Update: change of plans, this blog isn't moving anywhere for now.
2008-04-26
Navigating the Ocean of Information - Past, Present and Future
In 1945, Vannevar Bush, considered the grandfather of hypertext, was already concerned about the information explosion in which we live today. In his essay As We May Think he wrote: "The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships."
Since then, with the emergence of the Internet and the evolution of search engines, great progress has been made in this area. In the beginning, when the available information was very limited, simple pattern matching on words was enough to return the relevant documents to users. Because the number of documents was small, a user could rapidly inspect the results and decide whether they had found the information they were looking for.
As more information was added to the world wide web, more advanced techniques to rank search results were needed, since it was no longer viable for the user to analyze every document that contained the keyword. Many heuristics are used to estimate the relevance of a document with respect to a keyword, such as its presence in the title of the page, the number of times it appears divided by the total number of words, the distance between the searched terms in the document, etc. In the same manner, information external to the document is also used to determine its relevance, such as the anchor text of other sites that link to it.
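To make this concrete, here is a minimal sketch of how such document-internal heuristics could be combined into a single score. It is written in Python, the weights are made up purely for illustration, and no real search engine uses exactly this formula:

    import re

    def relevance(title, body, query):
        """Toy relevance score combining three classic heuristics:
        query terms in the title, term frequency, and term proximity."""
        terms = query.lower().split()
        words = re.findall(r"\w+", body.lower())
        if not words or not terms:
            return 0.0
        score = 0.0
        # Heuristic 1: query terms appearing in the page title.
        score += 2.0 * sum(1 for t in terms if t in title.lower())
        # Heuristic 2: term frequency (occurrences divided by total words).
        score += 10.0 * sum(words.count(t) for t in terms) / len(words)
        # Heuristic 3: proximity -- query terms close together score higher.
        positions = [i for i, w in enumerate(words) if w in terms]
        if len(positions) > 1:
            score += 1.0 / min(b - a for a, b in zip(positions, positions[1:]))
        return score

    print(relevance("Search engines", "how search engines rank web documents", "search engines"))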
Ranking algorithms that take better advantage of external information tend to present better results. With the increasing volume of information available, companies want better visibility in search engine results, which gave birth to the area called Search Engine Optimization. The information contained inside a document is easily manipulated, so it is easy for a company to create a site with poor-quality content that appears among the first results for specific queries. On the other hand, changing the relative importance that other people attribute to a site is more difficult. This led search engines to adopt algorithms that give more weight to pages linked to by other important pages, the most famous being PageRank, developed by the Google founders. In this case, the relevance of a link is determined through collaborative production, in which all the sites in the search engine's database participate, lowering the impact that local optimization techniques can have on the global results and bringing better results to all users of the search engine.
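To illustrate the intuition behind that kind of algorithm, here is a very simplified PageRank-style computation in Python. It is only a sketch under toy assumptions (a tiny hand-made link graph, a fixed number of iterations); the real algorithm runs over an enormous sparse link graph and has many more refinements:

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to.
        A page becomes important when important pages link to it."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                targets = outgoing or pages   # dangling page: spread its rank evenly
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # A toy web of four pages linking to each other.
    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(pagerank(web))   # "c" ends up with the highest score in this toy graph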
Yet specialists in Search Engine Optimization develop techniques to try to cheat even the more advanced algorithms, for instance by buying several domains that link to each other, or by paying sites with high visibility to include links to theirs. It is a constant war between search engines and spammers: the former try to improve their algorithms and increase their computing power, while the latter study new ways to increase the visibility of their sites.
However, a new model for determining what is interesting on the Internet has been gaining ground in recent years. Instead of leaving the task of judging the relevance of a document solely to an algorithm that makes a superficial analysis of the whole Internet's production, communities of people interested in the subject take on this task, constantly providing and updating data about the quality of the documents. Many people believe this will be the future of search engines. One such example is Wikia.
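As a rough sketch of what such a hybrid could look like (this is entirely hypothetical, not a description of how Wikia or anyone else actually works), community feedback might simply be blended with the algorithmic score when results are sorted:

    def rerank(results, community_votes, weight=0.5):
        """results: list of (url, algorithmic_score) pairs, scores in [0, 1].
        community_votes: dict mapping url -> average user rating in [0, 1].
        Returns the results re-sorted by a blend of the two signals."""
        def combined(item):
            url, algo_score = item
            votes = community_votes.get(url, 0.5)   # neutral when nobody has voted yet
            return (1 - weight) * algo_score + weight * votes
        return sorted(results, key=combined, reverse=True)

    results = [("http://example.org/a", 0.9), ("http://example.org/b", 0.6)]
    votes = {"http://example.org/b": 0.95}          # the community loves page b
    print(rerank(results, votes))                   # page b now comes first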
But how will people and algorithms interact, in some sort of human-based computation, to achieve the best results? This question is far from having a definitive answer, and it rests upon the many people interested in working together on a solution.
2008-04-14
Permissive licenses and the restrictions placed upon them
Permissive licenses like the BSD and MIT licenses impose few restrictions on the use and redistribution of software. Created in an academic environment, they are based on the principle of publishing and reusing ideas with as much freedom as possible.
The absence of stronger conditions for the distribution of software under these licenses implies limitations for its use with other licenses that work with the copyleft principle, such as the GNU GPL, which demands that any derived work that is to be distributed must be under the same terms of the original license, as it says in section 2 of GPLv2:
You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License
Still, this restriction applies only to uses that are protected under copyright law. As the GPL says, you don't have to accept the license, but nothing else grants you permission to do what would otherwise be prohibited by law. The main exclusive rights under copyright law are to copy, to distribute, and to create derivative works. There are other restrictions applied through software patents, but these aren't valid in many countries (including mine), so I won't discuss them any further.
Therefore, it is worth noting that the simplest form of use of software is free under any circumstances. This means that using an operating system or an integrated development environment licensed under the GPL while developing your software doesn't force you to license your software under GPL terms.
Other uses, such as distribution, contributions, derivative works, and use through linking, may be subject to specific conditions. Most of the licenses aren't very precise about the meaning of each of these terms, so the parties involved are left with an interpretation problem. Even though the Free Software Foundation explains in other documents what its intention was when it wrote the license, these clarifications have no legal value unless we are considering a piece of software owned by the FSF. As a result, because of the ambiguities in the text of the licenses, it is often not clear what the outcome of litigation would be.
Nevertheless, it's worth trying to understand the position of the FSF, to serve as a reference in the discussion about licensing software under a permissive license when it is related to other software that uses the GPL or LGPL. Let's consider a few cases:
- Modification of the source code to be distributed as a derivative work or to be returned as a contribution to the original work: it must be released under a compatible license. In the case of LGPL, the new work must be distributed as a library;
- Creation of a new work that uses a library licensed under the GPL: there is great controversy around this use, because some people consider that when a program uses a library, a collective work is created, composed of the program plus the library, rather than the program being a derivative work of the library. According to the FSF, however, the "viral clause" must be applied in this case, because the program as it is actually run includes the library (see the FAQ). However, the GPL says that if there are identifiable sections of the work that are not derived from the licensed program, and if those sections can reasonably be considered independent, then they are not required to be licensed under the GPL when distributed as separate works. In fact, many developers try to escape the GPL by distributing their code without the required libraries. But this may not work, because one could argue that the code isn't really independent from the GPL-licensed work. Moreover, if the work is distributed as part of a whole based on the work licensed under the GPL, then the distribution of the whole must be on the terms of the GPL, whose permissions for other licensees extend to the entire whole, and thus to each and every part of it;
- Creation of a new work that calls functions from software licensed under the GPL, but that doesn't need it in order to compile or run: there is even more controversy in this case, but in general it is subject to the same restrictions; that is, taking the more conservative interpretation, it is necessary to license the work under GPL terms;
- Creation of a new work that uses a library licensed under the LGPL: in this case, since the LGPL is a variation of the GPL intended precisely to allow libraries to be used by software that doesn't comply with the GPL, the work may be licensed under other terms, such as the BSD license.
To conclude, we should also note that the BSD license itself isn't perfectly clear about what the licensee is allowed to do either, since the terms "redistribution and use" and "with or without modification" can't be mapped directly onto the exclusive rights defined by copyright law. It is implicit that all the rights are being granted, but other licenses, such as the MIT license, are more explicit about such things.
Labels:
BSD,
free software,
GPL,
licenses,
MIT,
open source
2008-04-09