Archive for August, 2010

Microsoft should use its own Contacts

August 30, 2010

Something I have noticed after being exposed to many Microsoft products is that a lot of them don’t make use of other Microsoft products. I have a saying: "Microsoft is an 18-wheel company", because Microsoft will reinvent the wheel 18 times. I think a primary cause of many of the glaring examples is all of the company acquisitions which Microsoft has made over the years. The same problem was solved by two completely independent companies, Microsoft acquired both of them, and it has never been deemed worth the effort to unify the two implementations.

 

One of these areas is Contacts. How many solutions for Contacts does Microsoft need to come up with? Windows came up with a solution for Contacts years ago; it was called Windows Address Book, and now it’s the Contacts folder. Outlook has its own contacts, Windows Phone has its contacts, Communicator has contacts, Messenger has contacts, Hotmail has contacts, SharePoint has contacts, Business Contact Manager has contacts. Are these unified? No. The only product that makes use of another product’s contacts is the Windows Live Mail client, which uses Windows Contacts. So the contacts in Windows Live Mail are separate from the contacts in Windows Live Messenger.

 

Would it really be that bad if Outlook, Communicator, and Messenger all used Windows Contacts? Probably not; if they used the API, third-party products would probably start using it too. Why doesn’t Outlook do that? Well, it would take effort, and the current solution works for Outlook. Microsoft executives keep saying “Three screens and a cloud”, but I haven’t seen anything that shows it being enforced across the different Microsoft products.

 

I think that Microsoft would be miles ahead in the mobile space if they had pushed the Windows Address Book years ago. Syncing with it should have been a must-have for all cell phones. Microsoft should have bent over backwards to get phone companies to provide a port which would plug into a Windows computer, and Windows would have automatically synced the contacts with the Windows Address Book. Then, Microsoft should have advertised managing your cell phone contacts in Windows as a super easy thing to do; easier than managing them on your cell phone. A TV commercial would go like this: “I hate losing my cell phone and having to enter in all of my contacts again.” “Why do you do that? Every time I get a new phone I sync it to my Windows PC and all of my contacts are on my phone.” “Hey, I have a Windows PC; I should start doing that.” If you look in your Contacts directory, is there anything there? No. Why? Because you have no motivation to use Windows Contacts.

 

All of these different products allow contacts to be imported and exported through CSV files. So freakin’ what? Instead of creating solutions for importing and exporting, Microsoft should be focused on syncing. If I’m using more than one product for my contacts, I don’t want to import/export my contacts between the two products; I want to sync them. Then I’ll just naturally drift from one product to another without having to worry about which one I actually entered the contact information in. For this to happen, though, it would probably take Steve Ballmer to mandate it. Tell Outlook to get rid of its own contact storage and store the contacts in Windows Contacts. It can keep the Outlook interface, but it can’t have separate storage. Do the same for Office Communicator and Windows Live Messenger. Also, for the web interfaces (OWA, Hotmail, SharePoint, etc.), instead of having a pronounced import option, encourage a syncing option which lets people sync with the PC they are currently on. Doing this would be a wonderful first step in having consumers see a unification of Microsoft products across all three screens and the cloud.

 

There is a small light at the end of this tunnel. The Windows Phone 7 team didn’t feel like reinventing the wheel. They let Windows Live (Hotmail/Messenger) manage the contacts, and the phone just syncs over the cloud. At least one product has the right idea.

Don’t discover the WAN if your modem is still a router

August 17, 2010
Last night I single-handedly broke the internet. For my condo, anyway. Let me tell you what happened, so that you do not do the same.
It all started four years ago. I had just moved to Redmond, Washington and had ordered Verizon DSL. A package showed up which had a small 3x3x1 black box and a couple of phone line filters. The box was my modem. I plugged my landline phone into a filter and the filter into the phone jack. I plugged the modem into an unfiltered phone jack. Then, with an Ethernet cable, I plugged my Windows XP tower computer into the modem. All worked well.
A few months later I won a laptop at work. Now I desired a wireless way to connect to the internet. I purchased a low-end Netgear wireless router, connected the modem to the input jack of the router, and everything worked fine. I didn’t need to make any configuration changes.
Skip ahead to present day and I have a new router, more computers, and am living in a different apartment. Amanda is trying to copy pictures to an SD card for her digital frame, and some of the pictures are on different computers. No problem, right? All of the computers are part of the same Windows 7 HomeGroup. But looking at the connections from the different computers at different times, each computer could see or not see the others seemingly at random. While I was just looking at configurations, the different computers would be popping in and out of each other’s network maps. One would believe that once a computer dropped off of the network map it would no longer be able to connect to the internet, but no, all of the computers were able to connect to the internet the whole time. I logged into the Belkin router’s management page, hoping to see if I could resolve the issue. But most of the time the only connection that the router showed was the Wii. As far as it knew, it didn’t have any other clients. It’s possible the Hulu movie I was streaming at the time had overwhelmed the router, but either way I logged a complaint with Belkin and let the matter go for the night.
Last night when I came home from work, I was curious to see if looking at the modem might show me something useful. So I log into the modem’s web page, and see that the connection status is currently Up. It also has a link to check for updates. Oh, maybe that might help. But before I have it check for updates, I see that one of the options is to discover the WAN. Perhaps that can build a proper network map of my home computers. I click on that link, it errors out, and I can’t connect to the internet anymore. The status page says that the PPP connection is down. Every time I click on the button to have it connect, it errors out. Aw, crap.
At this point the computers can see the router, and they can see the modem, but the modem can’t see the internet. I dig through some paperwork, find a phone number for Verizon, call it, get forwarded to Frontier, and talk with tech support. Tech support informs me that I was never supposed to be able to connect to the modem’s web page. Since I have a separate router, I shouldn’t be using the modem as a router, but only as a modem. So tech support walks me through putting the modem into pass-through mode, instead of the modem + router mode that it had been operating in for the last four years. I didn’t want to do this, but I did want to connect to the internet.
So now I can connect to the internet, but I can’t log onto my modem anymore. My only guess for setting it back to being a modem + router is to use the reset button and restore it to the factory defaults. But I don’t know what that would screw up. I wish I could log into my modem, though; an update might help its performance.
The moral of the story is to not have your modem try and discover the WAN.

The way Net Neutrality should be

August 10, 2010

There’s been a bunch of talk on the news since the ruling a few months ago that the Federal Communications Commission had overstepped its bounds in enforcing restrictions on the internet. Basically, the judge said that Congress hasn’t granted the FCC the power to create and enforce certain rules and regulations. Now there’s a scramble in the legislature to craft powers over what the FCC can and can’t do in regulating Internet Service Providers.

On one side there are companies who would like their traffic on the internet to be prioritized and are willing to pay for it. It would probably start with intra-company communication, but would likely end up with companies paying to have their customer-facing websites load faster than their competitors’ websites.

On another side of the debate are the ISPs, who say that a few users are stressing their networks with file sharing, and who want to throttle peer-to-peer file sharing.

On the other end are the internet purists who don’t want the ISPs inspecting the packets going across their servers and networks. Part of this group are people who want to continue using the internet for illegal activities and don’t want their identities compromised to the authorities. Some of these activities are bad, and some are good. Say, for example, the illegal activity is spreading accurate information about the benefits of democracy and free markets.

Here is my thinking on the matter.

The ISPs’ major complaint is that file sharing is stressing their systems, so they want to throttle users. To do this, they want to inspect the packets and see what type of data is being transferred. I don’t think that this is the best solution to that problem. What they should do is charge heavy users more. Right now the competition between ISPs is based on the speed of the internet connection provided to customers. There is no mention of how much volume their customers get, so at the moment it’s unlimited. If there really are users using too much bandwidth, charge them for it. Rewrite the contracts to say that the connection will have a certain speed and a monthly allowance of megabytes. It’s just like cell phone minutes. ISPs can then have offerings of X megabytes free a month and fines for exceeding the cap. They could even have off-peak and on-peak hours.
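To make the cell-phone-minutes analogy concrete, here is a toy sketch of what such a metered bill could look like. All of the plan terms (base fee, allowance, overage rate) are hypothetical numbers, not anything an actual ISP charges:

```csharp
using System;

class MeteredBilling
{
    // Hypothetical plan terms, purely for illustration.
    const decimal BaseFee = 30.00m;           // flat fee for the connection speed
    const int IncludedMegabytes = 10000;      // the "X megabytes free a month"
    const decimal OveragePerMegabyte = 0.01m; // the fine for exceeding the cap

    public static decimal MonthlyBill(int megabytesUsed)
    {
        // Only the volume beyond the monthly allowance costs extra.
        int overage = Math.Max(0, megabytesUsed - IncludedMegabytes);
        return BaseFee + overage * OveragePerMegabyte;
    }

    static void Main()
    {
        Console.WriteLine(MonthlyBill(8000));  // under the cap: just the base fee
        Console.WriteLine(MonthlyBill(25000)); // heavy user: base fee plus overage
    }
}
```

The point of the model is that the bill depends only on how many megabytes crossed the wire, never on what was inside them.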

In this model the government (which is hopefully representing the people) can enforce that ISPs (and hence the government) aren’t inspecting the packets going across their servers. All they’ll get to do is charge for how much the users are using their service. It’s in the best interest of us, as citizens, to be able to keep our anonymity. And it’s good practice to charge for usage.

As for those who want to make certain types of content faster than other content: well, too bad. They’ll have to invest in making the internet as a whole faster. It’s worth it to keep all traffic moving across the pipes at the same speed. For one thing, it reduces the overhead of moving packets if each node in the network doesn’t need to inspect every packet and deprioritize most of the packets coming across.

So, my proposal naturally creates a model of ISPs charging every time a packet leaves their network. This could create the problem of ISPs starting to charge for content coming into their network. If ISPs did that, they would then pass the cost of interacting with other networks on to the consumer. You (as a consumer) could end up with a monthly bill where accessing some websites costs more than others, based on the networks that exist between you and the websites’ servers. In addition, the ISPs would have to create and run tools and processes for tracking all of this. Creating and running these would cost resources, which would of course be passed on to customers as part of the cost of doing business. So, for the good of everyone, doing this should be illegal. It would border on packet inspection, even if it wouldn’t be inspecting the entire contents of the packet.

ISPs are in a position where they would be tempted to charge servers for connecting to the internet. I find this odd, because it’s the servers which create demand for the ISPs’ product. ISPs could cut off the supply of content in an attempt to save a few pennies. Should this start to happen, I don’t think that the internet would go away, but it could create a situation where the cost of entry is so high that no new companies enter the World Wide Web. Again, something else which is bad for all of us. One of the major benefits of the internet is the ability for a startup to grow as fast as it can. The low barrier to entry has been a blessing for us all.

Those arguing for a less neutral internet (than the one we have today) say that spam makes up so much of the internet’s traffic that if ISPs started inspecting packets, they could throttle the spam down and free up resources for all of us. Well, the email has to start at a client somewhere. If clients start getting charged for the amount of megabytes they’re pushing, the price of spam would increase, causing the volume of spam to go down.

The government should regulate that ISPs are not allowed to inspect packets and are not allowed to charge other networks for the volume of data transferred between servers, but that they are allowed to charge for the volume of data being pushed to clients. This would address everyone’s concerns (except for those who are using a lot of bandwidth right now). As US citizens we don’t want the Postal Service inspecting our packages, and similarly we should demand that ISPs not inspect our packets.

Indexed Cached Collections

August 6, 2010

The IEnumerable&lt;T&gt; type in .NET is awesome. It allows for lazy evaluation of data. This can be good, and this can be bad. It’s really good when you don’t enumerate over the entire collection. It’s bad when you enumerate over the collection more than once. Andrew Arnott wrote up a caching enumerator, which is great for most cases: Caching results of .NET IEnumerable&lt;T&gt; generator methods. I’m currently in a situation where I want to cache the results, but the code doesn’t need to enumerate over the collection; it needs to index into the collection. So I wrote up these two classes to solve the problem.

using System;
using System.Collections.Generic;

namespace CachedCollection
{
    /// <summary>
    /// Access data by index where the source of the data is lazily evaluated
    /// </summary>
    /// <typeparam name="T">Type of the items in the collection</typeparam>
    public class IndexableCachedEnumeration<T>
    {
        private List<T> listCache;
        private IEnumerator<T> stream;

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="source">Source of the data</param>
        public IndexableCachedEnumeration(IEnumerable<T> source)
        {
            stream = source.GetEnumerator();
            listCache = new List<T>();
        }

        /// <summary>
        /// Get an item
        /// </summary>
        /// <param name="i">The index</param>
        /// <returns>Item at the index</returns>
        public T this[int i]
        {
            get
            {
                if (i < listCache.Count)
                {
                    return listCache[i];
                }
                else
                {
                    while (stream.MoveNext())
                    {
                        listCache.Add(stream.Current);
                        if (i < listCache.Count)
                        {
                            return stream.Current;
                        }
                    }
                    throw new IndexOutOfRangeException();
                }
            }
        }
    }

    /// <summary>
    /// Class to cache lazily evaluated conversions of items from indexable sources
    /// </summary>
    /// <typeparam name="S">Type of the items in the source collection</typeparam>
    /// <typeparam name="T">Type of the converted items</typeparam>
    public class IndexableCachedConvertedCollection<S, T> where S : class where T : class
    {
        private Func<S, T> conversion;
        private IList<S> listSource;
        private S[] arraySource;
        private T[] arrayCache;

        /// <summary>
        /// Constructor for an IList source
        /// </summary>
        /// <param name="source">IList of data</param>
        /// <param name="conversion">The function which converts an item in the source to the desired type</param>
        public IndexableCachedConvertedCollection(IList<S> source, Func<S, T> conversion)
        {
            this.conversion = conversion;
            listSource = source;
            arraySource = null;
            arrayCache = new T[listSource.Count];
        }

        /// <summary>
        /// Constructor for an array source
        /// </summary>
        /// <param name="source">Array of data</param>
        /// <param name="conversion">The function which converts an item in the source to the desired type</param>
        public IndexableCachedConvertedCollection(S[] source, Func<S, T> conversion)
        {
            this.conversion = conversion;
            arraySource = source;
            listSource = null;
            arrayCache = new T[arraySource.Length];
        }

        /// <summary>
        /// Get an item
        /// </summary>
        /// <param name="i">The index</param>
        /// <returns>Item at the index</returns>
        public T this[int i]
        {
            get
            {
                if (null == arrayCache[i])
                {
                    S item = null == listSource ? arraySource[i] : listSource[i];
                    if (null != item)
                    {
                        arrayCache[i] = conversion.Invoke(item);
                    }
                }
                return arrayCache[i];
            }
        }
    }
}
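For reference, here’s a rough sketch of how these classes might be used. The `Squares` generator method and the uppercase conversion are made-up examples, not part of the classes above:

```csharp
using System;
using System.Collections.Generic;
using CachedCollection;

class Demo
{
    // A hypothetical generator method; each item is produced lazily.
    static IEnumerable<int> Squares()
    {
        for (int n = 0; ; n++)
        {
            yield return n * n;
        }
    }

    static void Main()
    {
        // Index into the infinite lazy sequence; items are enumerated
        // once and cached, so repeated lookups don't re-enumerate.
        var squares = new IndexableCachedEnumeration<int>(Squares());
        Console.WriteLine(squares[4]); // 16 (enumerates items 0 through 4)
        Console.WriteLine(squares[2]); // 4 (served straight from the cache)

        // Convert items on demand, caching each conversion result.
        string[] names = { "alice", "bob" };
        var upper = new IndexableCachedConvertedCollection<string, string>(
            names, s => s.ToUpperInvariant());
        Console.WriteLine(upper[1]); // BOB (converted on first access only)
    }
}
```

Note that `IndexableCachedEnumeration` never enumerates past the highest index requested, which is exactly what you want when the source is expensive (or, as here, infinite).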