An Unsuccessful Evernote to Google Docs Conversion Project

I’ve been using Evernote for around 20 years. It’s OK, but the product seems to be flailing these days. Plus, I was forced to become a paying user. That, in itself, is no sin but it motivated me to investigate alternatives.

Meanwhile, I’ve been reflecting on all the gigabytes of Google Drive storage space available to me. Since I’ve been looking for a programming project to exercise my new skills in the Go programming language, all of a sudden I had the idea of trying to come up with a way to use Go to move all my Evernote notes to Google Docs.

This note describes my efforts to use Go to migrate my 480 Evernote notes into Google Docs. Although I will eventually write some code, there are various things to think about before starting. I’ll include my thought process as the project progresses.

How To Export My Evernote Notes

This is an important question. Even though I have only 480 notes, manually copying them to Google Docs is out of the question. Plus, what kind of programming project would that be? I should say that the vast majority of my notes only contain plain text. I don’t make much use of Evernote’s more advanced storage features, which is another reason why I’m thinking of migrating away.

I know that both Evernote and Google Docs have APIs that I might be able to use for this project. But, after some reflection, I realized that using them might be overkill, at least for the export step. If Evernote includes a way of exporting my notes into a single file in a usable format, I might be able to then use this file as input to the Google Docs import step.

I did some research and found that currently the only way to export from Evernote is by saving the notes in ENEX format, which is XML based. Older versions of Evernote allowed exporting in HTML format, but this is no longer supported. That’s probably just as well because I suspect the XML format will be easier to deal with. Actually doing the export turned out to be trivial, and I now have a file called EverNote.enex, which is 11MB in size.

One interesting thing is that every day or so an Evernote popup appears telling me about new Evernote features to expect. One of the new features is Additional export options. I wonder if such options will help this project.

In any case, I now have EverNote.enex on my Fedora Linux system. (I should mention that I run the Evernote desktop client on Windows. However, for this project, I’m doing all the conversion work on Linux, mainly because my Linux machine is much more powerful than my Windows machine.)

How Will Evernote Notes be Organized in Google Docs?

Google Docs, which is built on top of Google Drive, allows the creation of a directory hierarchy. So, I’m thinking that each Evernote tag will be represented by a sub-directory. One thing I haven’t figured out is what to do with notes that have multiple tags. If Google Drive supported symbolic links then this would be easy, but I don’t know if it does. [Update] I did some searching and it looks like it does, but I’m not going to spend much time on this now.

Working With EverNote.enex

Evernote has a document called How Evernote’s XML Export Format Works. It says it was written back in 2013, which isn’t a surprise given that it mentions HTML as an output format, which is no longer supported. But it’s a start. It mentions that the ENEX file is in a format defined by version 3 of the Evernote Export doctype declaration.

I now have an XML file, but I don’t really know what to do with it. I’ve never worked with XML before. I’m thinking that as a first step I could write something that would show me the tags assigned to each note. That would accomplish two things:

  1. Parse the XML file.
  2. See if any of my notes have more than one tag assigned to them, which is an issue I mentioned above.

A shortcut to accomplishing these goals is to use Firefox to do the initial parsing. Indeed, if I tell Firefox to open the XML file, it shows me a tree-structured representation of the XML. From that it shouldn’t be hard to look at the <tag> lines to find notes with multiple tags. [Update] I used Vim to remove all non-<tag> lines from EverNote.enex. It turns out that doing this didn’t help because although I now see a whole bunch of tags, I can’t see which note they belong to. This means I can’t see if any single note has more than one tag. I’ll have to figure out a more intelligent scheme for this. So, I used a regular expression that deleted all lines that didn’t have “note>” or “tag>”. This still left a bunch of lines with XML elements I didn’t want to see, so I removed these lines too. The end result showed me that I do indeed have multiple notes with multiple tags. This isn’t a bad thing – it just means I’ll have to be careful to do the right thing with the multiple tags.

Querying EverNote.enex

But this got me to thinking that it would be better if there were some kind of query language I could apply to the parsed XML representation so that I could see arbitrary collections of items. I have to confess that at this stage of the project I have no idea how to do this. But it seems like any project dealing with XML would have similar needs, so I suspect that such a program already exists.

Flash forward a couple of days, and I’ve learned about XPath, which Wikipedia says “is a query language for selecting nodes from an XML document”. This sounds like exactly what I’m looking for. I’ve also learned about XQuery, a more powerful query language built on top of XPath. There are web sites for both where you can paste in some XML and then execute queries. This might be fine for some simple queries, but after I’m done playing around, I’m going to need a local program because the web sites only permit a limited amount of XML. I did some research and found xmlstarlet, a simple command-line tool that can run XPath queries.

Now that I have some experience manipulating EverNote.enex it’s time to look at it in more detail. I’m especially interested in deciding which elements I can ignore, and which I’ll have to put in Google Docs.

Here’s a schema for the elements in EverNote.enex that I care about:

<title> </title>
<tag> </tag> [<tag> </tag>] …
<content> </content>
<data> </data>

I remembered that I had heard of the Go xml.Unmarshal() function. This parses legal XML and puts the results into a struct. I started looking around for examples of how to use xml.Unmarshal(). Although I found several good examples, none of them really described how to construct the struct to contain the results. So I blindly tried modifying one of the examples. Amazingly, after fixing a few dumb mistakes, the result worked! It found 480 notes, which is correct.

Here’s the code:

package main

import (
        "encoding/xml"
        "fmt"
        "io/ioutil"
        "log"
        "os"
)

type Notes struct {
        XMLName xml.Name `xml:"en-export"`
        Notes   []Note   `xml:"note"`
}

type Note struct {
        XMLName xml.Name `xml:"note"`
        Title   string   `xml:"title"`
        Tag     string   `xml:"tag"`
        Content string   `xml:"content"`
        Data    string   `xml:"data"`
}

func main() {
        var notes Notes

        xmlFile, err := os.Open("EverNote.enex")
        if err != nil {
                log.Fatal(err)
        }
        defer xmlFile.Close()

        byteValue, err := ioutil.ReadAll(xmlFile)
        if err != nil {
                log.Fatal(err)
        }

        if err := xml.Unmarshal(byteValue, &notes); err != nil {
                log.Fatal(err)
        }

        fmt.Printf("Found %d notes\n", len(notes.Notes))
        for i := 0; i < len(notes.Notes); i++ {
                fmt.Printf("\n\nNote %d\n", i)
                fmt.Printf("Title: %.40s\n", notes.Notes[i].Title)
                fmt.Printf("Tag: %.40s\n", notes.Notes[i].Tag)
                fmt.Printf("Content: %.40s\n", notes.Notes[i].Content)
                fmt.Printf("Data: %.40s\n", notes.Notes[i].Data)
        }
}
In order to make the output semi-readable I only print the first 40 characters from each element.

Naming Google Doc Files

As I said above, I had been thinking of creating one Google Doc file for each Evernote note. One of the first problems with such an approach is deciding what to name each file. I had naively thought that I could use each Evernote note’s title as the file name. Now that I can see the XML representation of each note clearly, I no longer think this is such a good idea. This is because the default title for a note is the first line of the note. In some cases this would work fine, assuming I don’t mind spaces in filenames. For example, I might have files named Setting Up Fedora or tmux notes. But what about # diff mod_status.c mod_status.c.orig or A 32Kb (2^13) RAM chip organized as 8K X 4 is composed of 8192 units, each with? Short of manually modifying all 480 notes to have a useful title, I’m not sure what to do right now.
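One cheap partial fix is to mechanically sanitize each title before using it as a file name. Here’s a rough Go sketch; the 60-character cap and the replacement choices are arbitrary assumptions of mine, not Google Drive rules.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeTitle turns a note title into a tamer file name: slashes
// become dashes, runs of whitespace collapse to single spaces, long
// titles are truncated (crudely, by bytes), and an empty title gets a
// placeholder name.
func sanitizeTitle(title string) string {
	clean := strings.ReplaceAll(title, "/", "-")
	clean = strings.Join(strings.Fields(clean), " ")
	if len(clean) > 60 {
		clean = clean[:60]
	}
	if clean == "" {
		clean = "Untitled note"
	}
	return clean
}

func main() {
	fmt.Println(sanitizeTitle("# diff mod_status.c mod_status.c.orig"))
	fmt.Println(sanitizeTitle("   "))
}
```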

Formatting Commands in Notes

I noticed another problem. Even a simple looking note can contain a lot of formatting commands. The best way to explain what I’m talking about is a short example.

Somehow the following note:

# diff mod_status.c mod_status.c.orig
 <         ap_rputs("<script src=\"sorttable.js\"></script>\n", r);
 <             ap_rputs("\n\n<table class=\"sortable\" border=\"0\"><tr>"
 >             ap_rputs("\n\n<table border=\"0\"><tr>"

appears in the exported XML file as (partially edited)


<![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<!DOCTYPE en-note SYSTEM "">

<en-note style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;">

]# diff mod_status.c mod_status.c.orig<br/>


&lt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ap_rputs(&quot;&lt;script src=\&quot;sorttable.js\&quot;&gt;&lt;/script&gt;\n&quot;, r);<br/>


&lt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ap_rputs(&quot;\n\n&lt;table class=\&quot;sortable\&quot; border=\&quot;0\&quot;&gt;&lt;tr&gt;&quot;<br/>


&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ap_rputs(&quot;\n\n&lt;table border=\&quot;0\&quot;&gt;&lt;tr&gt;&quot;<br/><div><br/></div><div><br/></div><div>tar rvf httpd-2.2.3.tar.gz&nbsp; $a/mod_status.c</div></en-note>      ]]>


This is very discouraging because I wouldn’t want all those formatting commands to appear in a Google Doc file. But, I’m not sure I could reliably strip them out. In retrospect this isn’t surprising because the only thing that Evernote says can be done with an exported file is to import it back into Evernote. Maybe using an XML export is not the way to handle this problem. I think that there’s also an API into Evernote, which I’m going to look into next.
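For what it’s worth, a crude strip can get surprisingly far on plain-text notes like mine. The sketch below is my own naive approach, not anything Evernote provides: it turns <br/> and </div> into newlines, drops every other tag, and decodes the entities. It would mangle anything fancier, like tables or images.

```go
package main

import (
	"fmt"
	"html"
	"regexp"
	"strings"
)

// tagRe matches any XML/HTML tag.
var tagRe = regexp.MustCompile(`<[^>]+>`)

// enmlToText is a naive ENML-to-plain-text conversion: line-breaking
// elements become newlines, every other tag is dropped, and HTML
// entities are decoded. Fancier ENML (tables, images, to-dos) would
// need real parsing.
func enmlToText(enml string) string {
	s := strings.ReplaceAll(enml, "<br/>", "\n")
	s = strings.ReplaceAll(s, "</div>", "\n")
	s = tagRe.ReplaceAllString(s, "")
	return html.UnescapeString(s)
}

func main() {
	sample := `<en-note>&lt; ap_rputs(&quot;hi&quot;, r);<br/></en-note>`
	fmt.Print(enmlToText(sample))
}
```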

[Update 1] I just found a document describing the format Evernote actually uses to store notes. It’s just as I guessed above. This leaves me in a bad state. Even if I were to use the Evernote API I’d still end up with the same extraneous formatting commands. I don’t know how to overcome this so I’m going to stop writing. If I figure out how to solve it I’ll continue this note.

[Update 2] I tried several of the opensource equivalents of Evernote but none of them appealed to me. So, I bit the bullet and manually copied all my Evernote notes into Microsoft OneNote. It’s not perfect but it’s good enough. Plus, I save enough from not having to pay for Evernote to mostly pay for an Office365 subscription.


A Human ‘Make’ Program (almost)

I started with V6 Unix at UC Santa Barbara in 1977. I remember that when V7 came out, I learned about the ‘make’ program and started using it with great success to help efficiently build a large Fortran package for signal processing.

For its size, there was a lot of computing going on in Santa Barbara at that time. It was one of the first four ARPANET nodes, and as a result there were a bunch of companies making networking products and doing speech research.

I was a student at UC Santa Barbara but I started toying with the idea of finding a real job, mostly to make more money. I found several possibilities and went to interview at one.

This place had a need for somebody to, in essence, be a human ‘make’ program. The computer they used, some kind of Data General, was so slow that they couldn’t do a build more than once or twice a day. So, in an attempt to speed up the build, they wanted to hire somebody who would, by hand, keep track of the last modification date of all the components in the package they sold, and do a build that only performed the necessary steps to generate the package – in other words, a human ‘make’ program. Apparently they figured that this would save enough time to justify the $24K salary they were willing to pay. $24K in 1978 wasn’t a bad salary at all.

I didn’t take the job, but I’ve often thought that what I should have done would have been to take the job under the condition that I could mostly work remotely. Then, I could have used the ‘make’ program on our V7 Unix system to generate the optimal script to build the package, and then taken the script back to the company to run on the Data General computer. I figure this would have taken maybe an hour a day. The rest of the time I could have spent on the beach thinking about ways to spend that $24K.

A Tragedy of Greatness

In the early 1980s I was the computer system manager at the Institute of Theoretical Physics (ITP) at the University of California Santa Barbara. ITP is the kind of place where people from all over the world come to work on hard scientific problems. As you can imagine, a place like this could tumble into chaos if it weren’t directed by someone who had the respect of the participants.

Fortunately, in 1980 ITP was able to recruit the Nobel prize winner Bob Schrieffer to be its Director. Bob won his Nobel prize in 1972, along with Bardeen and Cooper, for developing the BCS theory of superconductivity. Bob was perfect for the job. In addition to his obvious technical skills, he was very good at getting along with all kinds of people. This was a surprise to me because I had a preconceived notion that Nobel prize winners were stuffy and aloof.

I started interacting with Bob closely for an interesting and unexpected reason. Bob had done the research in the 1950s and 1960s that resulted in his Nobel prize. This was before computers were commonly used in theoretical physics. To his credit, Bob realized that he was seriously uninformed about modern computing, especially about the kinds of computers that the researchers at ITP were starting to use. So, on several occasions he asked me to have lunch with him so that I could bring him up to date. Needless to say, Bob was very perceptive and asked excellent questions. To this day I’m still amazed that a Nobel prize winner asked me to teach him something. Also, even though I was, and still am, a nobody, Bob invited me to several parties at his house. To be honest, these weren’t small intimate gatherings, but still it was quite an honor for me to be included.

I left Santa Barbara in 1985. Bob also left, in 1992, to become the chief scientist at the National High Magnetic Field Laboratory at Florida State University in Tallahassee. Although I had no further contact with him, I still checked what he was doing from time to time.

I was therefore completely shocked to learn that on September 24, 2004 Bob was in an automobile accident that killed one person and injured seven others. He had nine prior speeding tickets and was driving with a suspended license. It was said that Bob had fallen asleep at the wheel of his car. On November 6, 2005, he was sentenced to two years in prison for vehicular manslaughter. This was a tragedy in many ways. In addition to the obvious effects on the people involved in the accident, the fact that Bob had been so nice to me made it especially tragic.

Costco vs. Zenni Optical – A Tale of Two Glasses

Recently an unusual coincidence happened to me when I finally got my act together to get a new pair of glasses. The glasses I had been wearing were starting to fall apart, but that was my own fault for mistreating them. I believe I had bought them from Zenni Optical ~3 years ago. (I could be wrong about where and when I got them but that doesn’t matter because I’m 100% satisfied with them.)

I have vision insurance that covers eye examinations and new glasses. I hadn’t used it in a couple of years so now was the time to see exactly what it covered. I took my prescription to Costco, and selected a frame that cost $60 and lenses that cost $80 for a total of $140. I was told that the insurance would pay $63, leaving $77 for me to pay.

When I got home, just for yuks, I double checked the purchase summary from Costco. Guess what – the optician who took my order had made a mistake. He entered 66 instead of 86 in the Axis column for one of the lenses. I called him and he said that I was right and that he’d fix it. He also said that their “auditors” would probably have caught this mistake (remember this). Fine. We all make mistakes, so no harm done. The glasses came in and they were perfect.

As part of this experiment, I decided to try Zenni Optical again to see how their cheap frames and online ordering experience compared to Costco’s. I went to their web site and picked out a $6.95 frame and $37 lenses for a total of $48.80, including shipping. I tried to get the same options for the lenses as I got at Costco but I’m not sure if they’re exactly the same so I couldn’t make an exact cost comparison. Still, it’s interesting to note that the total cost of the Zenni glasses was well under the amount that my insurance would pay.

The Zenni glasses arrived in about a week, and I immediately tried them on. To my surprise, one of the lenses was completely the wrong prescription. Otherwise, the glasses seemed fine. I checked the order and, sure enough, this time I was the one who entered an incorrect number. I had entered 2.50 instead of .250 in the Sphere column for the lens. This was entirely my fault, but I’m surprised that Zenni doesn’t do some kind of auditing like Costco claims to do, either when values are entered on the web page, or later on before lenses are made. I don’t know anything about optometry but I would think that it would be possible to flag certain values as unlikely. This shouldn’t result in a canceled order, but a warning message could appear on the web page, or an email message requesting confirmation could be sent.

I contacted Zenni to see what my options were. To my surprise, they agreed to a one-time store credit for the amount I had paid, minus shipping, even though the problem was 100% my fault. This is excellent customer service! I reordered the glasses, being extremely careful to enter my prescription correctly. The glasses arrived several weeks later, and they’re perfect too!

The lessons of this story are 1) check your lens order to make sure nobody made a mistake, and 2) definitely consider Zenni Optical because their price and customer service are excellent.

The Forrest Conjecture

(I originally wrote this in 2003, and posted it to the comp.arch USENET group then. It generated a fair number of insightful comments which I’ve attempted to incorporate into this new version).

You might not know it, but the programs that run on your computer are actually divided into 2 pieces. One piece is where the instructions that your computer executes are stored. These instructions are things like “ADD” or “JUMP”. This is called the “text space”. The other piece is where the data that the program accesses is stored. This is called the “data space”.

32-bit processors in PCs started to appear in about 1985. A 32-bit processor can address a 4GB text and a 4GB data space. At the time, a 32-bit processor was a huge improvement over the 16-bit processors that came before, which couldn’t address more than 64KB, at least not without painful tricks. 32-bit processors made it possible to run much larger programs that could process much more data than before. All was well until applications started appearing that needed to access more than 4GB of data. To solve this, AMD, and then Intel, released 64-bit processors. (Because of the way processors are designed, the next increment from 32 bits is 64 bits.)

Today, 64-bit processors are ubiquitous, and everyone has enough address space to do what they need. However, I claim that 64-bit processors are being pushed in one way that’s completely unnecessary: although it’s crystal clear that a 64-bit data space is critical, there’s no need at all for a 64-bit text space. A 32-bit text space would be fine even today, roughly 30 years after 32-bit processors first appeared. The reason for this is simple – it’s simply too complicated for a human, or a group of humans, to write a program that comes close to filling up a 32-bit text space. Unless humans get much smarter, this isn’t likely to change.

To prove this, I measured the total text size of every single executable and library on a large Ubuntu 16.10 server system. This size was slightly under 2GB. This means if every program and library on this system were somehow combined into one giant program, it would still fit in a 32-bit text space.
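This measurement can be reproduced with a short Go program that walks a directory tree and sums the sizes of the executable sections of every ELF file it finds. The directory below is just an example; the actual measurement covered every executable and library on the system.

```go
package main

import (
	"debug/elf"
	"fmt"
	"io/fs"
	"path/filepath"
)

// textBytes returns the total size of the executable sections (.text
// and friends) of one ELF file, or 0 if the file isn't ELF.
func textBytes(path string) uint64 {
	f, err := elf.Open(path)
	if err != nil {
		return 0 // not an ELF file; skip it
	}
	defer f.Close()
	var n uint64
	for _, s := range f.Sections {
		if s.Flags&elf.SHF_EXECINSTR != 0 {
			n += s.Size
		}
	}
	return n
}

func main() {
	var total uint64
	filepath.WalkDir("/usr/bin", func(path string, d fs.DirEntry, err error) error {
		if err == nil && !d.IsDir() {
			total += textBytes(path)
		}
		return nil
	})
	fmt.Printf("total text: %d bytes\n", total)
}
```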

Notice that I’m talking about a program written by a human. Obviously you could write a program that itself generates a program of any size. I’m told that Computer Aided Design programs generate huge amounts of text space. I’m also talking about one program running on one computer. I suppose it’s possible to design a processor in which different parts of the text space are actually running on remote processors, but I haven’t seen one. I’m also not talking about programs run by an interpreter. A classical interpreter treats programs as consisting entirely of data, and, as I mention above, the need for a 64-bit data space is clear.

To be clear, I’m not seriously suggesting that somebody make a processor with a 64-bit data space and a 32-bit text space. After all, a 64-bit text space might be unnecessary but it doesn’t do any harm.

Does anybody know of any programs that require more than 32 bits of text?

The First Bangles Fan?

I went to college at UC Santa Barbara in the 70s. In the late 70s I had a girlfriend named Pam. She told me she had a couple of sisters who were in a garage band back in Los Angeles. I’m from Los Angeles too, so I knew that this was nothing unusual. The fact that her sisters were still in high school was interesting but again, this was nothing terribly surprising since everybody in Los Angeles was in a band or trying to get into show business.

One weekend Pam and I drove down to L.A. in her VW bug to meet her family. While I was there I did meet her sisters. If I remember right, I heard them play a little bit but I didn’t think they were anything special. Some time after this, maybe 6 months later, Pam was graduating from UCSB and her parents threw her a big party. Her sisters’ band played at the party. They were better than when I heard them the first time, but they were still nothing special, or so I thought.

You can guess where this is going. Pam’s last name was Peterson, and her sisters were Vicki and Debbie Peterson. So, I saw the Bangles long before they actually became the Bangles. It’s been great fun watching their success.

Today I learned that Pam died last March. This was very shocking. I hadn’t had any contact with her in years until 2010, when I posted a message to a Bangles board that the Bangles read. I asked for them to pass my email address on to Pam, which they did. I got an email message from her on 6/28/2010. She remembered me and asked me to drop her a line, which I did. Sadly, I never heard back from her in spite of my making several attempts. I don’t know when she got sick, but maybe this is why she never responded.

Anyway, when I read about people’s memories of the Bangles in the old days I always think back to that graduation party and think “I saw them first”.

The Forrest Curve

[Note: I originally wrote this in the early 1990s so some of the references are dated. Nevertheless, the point I’m trying to make is more true now than ever.]

The Forrest Curve
Jon Forrest

There is a phenomenon sweeping the computer industry that is having a profound but largely unrecognized effect. I claim that companies ignoring this phenomenon will suffer a slow and painful death. What’s more, there’s absolutely nothing that can be done to escape it. In this article I first describe this phenomenon and then spend some time trying to figure out what it means.

Simply and briefly stated, my hypothesis is that fewer and fewer computer users think their computer is too slow. I’ve invented what I call the Forrest Curve to illustrate this.

Here’s the Forrest Curve:

                   | \
                   |  \
    "slow"         |   \ /\
    factor         |    v  \
                   |        \ /\
                   |         v  \
                   |             \   /\
                   |               v   \
                   |                    \

                   -- time --

This is a curve with a general downward slope, having occasional upward blips. The curve approaches but never hits 0. Neither axis is drawn to any scale nor can be used to derive any specific numeric values.

The “slow” factor is the number of people who think their computer is too slow. I admit that this isn’t a very objective measurement but you can get a feel for it by the amount of grumbling about computer speed that takes place in your office.

I’m also not being very specific about exactly what constitutes a “computer”. I claim it really doesn’t matter. Taken as a whole, the whole pile of stuff sitting on someone’s desk (or lap), is what I consider to be a computer. A more detailed examination wouldn’t change the Forrest Curve.

Note that I’m including the entire population of computer users in this graph, many of whom know very little, if anything, about what their computer is really doing or how it works. But, even if I were to confine this graph to “software professionals” the graph would merely have a higher origin point. The general shape wouldn’t change.

I also recognize that there is a class of users that can and will always be able to consume any amount of computer resources. These guys are why the Forrest Curve never goes to zero. In spite of their needs they can’t reshape the Forrest Curve because they don’t have enough money to spend anymore.

Every so often something comes along that causes a temporary perturbation in the Forrest Curve. Some examples might be relational databases, X-windows, Windows NT, multimedia, handwriting and speech recognition, and so on. This is natural. There will always be such cases and they admittedly can cause high blips in the Forrest Curve. Sometimes these blips are partially flattened by special purpose hardware but the problem is that special purpose hardware usually has a short lifespan and is doomed to financial failure due to lack of economy of scale. The rest of the time general purpose hardware will catch up. The one exception I can see here is that the hardware necessary to handle digital video is special purpose now but will soon be a commodity, once consumer television goes digital.

Another implication of the Forrest Curve that we’re already seeing is the shrinking, if not outright elimination, of the distinction between a workstation and a PC. A while ago you could think of a workstation as a kind of special hardware gizmo that was only bought for a select few. The rest of us got PCs. But now, with 450MHz Pentium IIs, the PCI bus, fibre channel disks, and all the rest, it’s gotten to the point where the main difference between a workstation and a PC is the size of the monitor, the lack of IRQs in workstations, and maybe the amount of memory you can stick in a PC.

The Forrest Curve implies that the folk myth claiming that people’s requirements for computing power expands to consume all available computer cycles is no longer true. I’m not convinced that it ever was true, although I have more faith in its corollary about disk space. Meanwhile, although Moore’s Law, which states that the power of microprocessors doubles every 18 months, seemingly operates independently, the Forrest Curve does predict that Moore’s Law will start to spread out as the cost of producing ever faster microprocessors rises.

Another factor I recognize is that having an infinitely fast computer on your desk doesn’t do you any good unless it runs the applications you need to run. I’m choosing to ignore this issue.

Let’s assume you accept the Forrest Curve. What does it mean?

It means that computer vendors are going to have a tougher and tougher time selling computers. This is because people above the curve only need a new computer when something breaks. This happens less and less often. Even most disk drives, which are about the most mechanical part of a computer system, come with at least a 3 year warranty.

It means that computer purchasing decisions are no longer made based on price/performance or just performance, like in the dark ages. Now, when somebody decides to buy a new computer it will be price alone, or maybe price and service, that determines which computer to buy. The service aspect should not be ignored. Some people consider it very important to know that they can call somebody to come to their home or office to fix stuff and are willing to pay a fair amount for this. Other people feel that it’s important to buy name brands no matter what the quality of the name brand is. For those of us who are more enlightened, if our application requires 25 MIPs to run, and we’re trying to decide whether to buy the 30 MIP machine or the 50 MIP machine to run it, the number of people who’d pay much extra for the 50 MIP machine is very small. Let’s face it, nobody is going to turn down a faster computer but the people making the purchasing decisions will have a harder and harder time justifying the extra cost of a faster system for most people. This is especially true in environments where large numbers of computers are bought for non-technical people.

In earlier versions of this document I had the following sentence right here: “It means that companies like DEC and SGI that are trying to produce the fastest computer are slowly committing suicide because there will be fewer and fewer people who need to buy computers this fast.” Two points for me. As of this writing, DEC is gone and SGI is fighting for its life. On the other hand, companies like Sun, and virtually all PC companies, are doing the right thing by concentrating on staying just below the Forrest Curve by selling computers that are fast enough at the lowest price. Although Sun’s approach might have been an accident it has kept them profitable during some extremely hard times in the industry. Ironically, it may turn out that Sun will start to suffer too unless it can sell SparcStations at PC prices or increase their performance to rise above the Forrest Curve for a little while.

So, except for breakage, to be successful the computer industry is going to have to concentrate on selling to people who don’t currently have a computer. How many people is that in modern society? Maybe the laptop industry will thrive anyway, since it isn’t affected by this saturation: most people who buy laptops already have at least one computer, so a laptop is an additional purchase rather than a replacement.

Maybe computer vendors can postpone hitting the Forrest Curve by concentrating their marketing and sales efforts into the Second and Third World but I bet the Forrest Curve still applies there, but with a lower origin point. Plus, I wonder how much money there is to be earned there, given hard currency and other non-technical problems. But, even if these places are exploited, the Forrest Curve is merely spread out a little. There’s simply no way of escaping it.

Another way to explain the Forrest Curve is as just the commoditization of computer technology. Assuming your application runs on a certain computer architecture, there’s little any vendor can do to add enough value to get you to buy their system instead of somebody else’s. For all intents and purposes, the different brands of computers are all the same, just like different brands of flour and sugar, and buying a computer will be similar to buying baskets in Tijuana. The only way for computer vendors to survive is to remember this, and to remember that price and service will be what makes or breaks them. Caveat Vendor!

[Update 9/2022]

I originally wrote this article back in the early 1990s. It’s now Sept. 2022, roughly 30 years later. Is the Forrest Curve still valid?

I think it is. The huge I/O speedups from solid-state disks, along with RAM price reductions, CPU speed increases, and multi-core CPUs, have together made desktop PCs more powerful than most people need, just as the Forrest Curve originally predicted.

However, the Forrest Curve does have a major fault. It originally described only desktop computing. It didn’t anticipate mobile devices, such as cell phones and tablets, which didn’t exist then. These devices connect to the Internet at a slower speed than desktop PCs and are severely limited in how much power they can use. But the Forrest Curve still applies to mobile devices. How often do you hear people complaining about their cell phone being too slow? If you do, chances are that the complaint is actually about how fast the phone connects to the Internet, not the speed of the phone itself.

Another development that the Forrest Curve didn’t anticipate is using graphics processors for general purpose computing. The number of people who do this is small, but the processors make it possible to do things significantly faster than before. Crypto-currencies, like Bitcoin, wouldn’t have been feasible without them. Plus, computer gaming is dependent on them.

Copyright 2022 Jon Forrest.
All rights reserved.

This document may be published in any forum for any reason provided the document is not modified in any way.

Last updated (9/18/2022).

Big Musical Fun

I have to wonder if the reason why so many of those pop music stars self-destruct is simply because they’re not having any fun. In my travels through YouTube I’ve come across a bunch of music being played with what looks and sounds like big fun. The musical quality is all over the place, but that doesn’t really matter. It’s all very real, and honest.

Here are some examples of what I’m talking about:

Virtually anything this guy and his friends do is worth watching.

This isn’t an easy song to do, for anyone.

This must have been so much fun! That tenor sax player is good enough to join the real band.

This is more of an acquired taste, but just imagine playing this.

These guys are all pros who’ve done it 1,000,000 times but, for some reason, it looks like they’re really having fun this time.

I can’t imagine how he put all this together.

You can’t dance to this, but they did a fantastic job.

These guys don’t wear fancy clothes or make funny faces, but just imagine putting this together.

This is a group of high school kids. They’re not perfect but they’ve got their own way of adding value.

This style of music might not be your cup of tea, but just watch how much fun they’re having.

These guys and girl are far from rock stars, and I doubt they’re getting paid. But watch how much fun they’re having.

I didn’t know that Rockabilly was big in Latvia, but these guys and girl are doing just fine, and having what looks like a great time.

So what if they didn’t have a big audience? They’re all fantastic, especially the guitar player + singer guy. Imagine this much talent at an unknown BBQ joint.

Imagine recording this on a fine sunny summer afternoon with a whole bunch of your friends.

Watch how these two enjoy each other’s singing.

This is the ultimate garage band, plus they’ve got Boris Johnson from the UK on drums. Check out their extensive videos on YouTube – you’ll be amazed.

The True Story Behind The “No Bozos” Logo

Yes, the rumors are true. I invented the (in)famous “No Bozos” logo you see above. Over the years, people have asked me for the story behind how this all came about. So, I thought I’d write it all down here, once and for all.

If you were around in the 1960s, you might have heard (of) the Firesign Theatre. They were a group of performers who recorded several albums of stream of consciousness-like ramblings and rants. One of their albums was called “I Think We’re All Bozos On This Bus”. Very funny stuff. For some reason, this title stuck in my mind.

In the 1970s, I made several trips to Europe. One of the first things I noticed was the preponderance of signs showing an image surrounded by a circle with a line through it. The idea was that whatever the image showed wasn’t allowed. Cars, cigarettes, and swimsuit tops often appeared in these images. For some reason, these images also stuck in my head. At some point, I connected the “Bozos” saying with the images I had seen in Europe, and the “No Bozos” logo was born. Unfortunately, since I can’t draw, it was held prisoner in my brain.

In fact, it was still in my brain in the early 1980s when a fortuitous sequence of events happened. I had a friend, Ed, who worked at a place where a graphics person, Kristi, also worked. One day I mentioned the “No Bozos” idea to Ed, who thought it was pretty darn clever. He, in turn, mentioned it to Kristi, who also recognized its brilliance. Unlike me, Kristi was a good artist – so good that she was able to quickly draw the world’s first “No Bozos” logo. Then, another person where Ed worked, Howard, saw the “No Bozos” logo and instantly recognized its tremendous commercial potential. Howard was the kind of guy who liked to put together and promote companies so he contacted me and proposed that we try to make some money from the logo. How could I say no?

The main obstacle was getting clearance from the holder of the “Bozo” clown image, who was Larry Harmon. Howard did all the negotiating and managed to work something out. As I remember, Larry Harmon got more out of the deal than Howard, Kristi, and I did combined. Howard and Kristi found a printing company to produce a “No Bozos” sticker in several sizes and shapes. I’ll always remember when I got my first box of stickers. I was at UC Santa Barbara at the time and I did a good job handing them out. Later there was even a “No Bozos” Hall in one of the dormitories on campus. (I had nothing to do with creating it, and I was very surprised when I saw it).

As a good promoter, Howard was able to generate lots of publicity for “No Bozos”. There was a time when I was interviewed regularly by radio stations and newspapers. It seemed like each one asked the same question, which was “What’s a Bozo?”. The highlight was a big mention in Playboy Magazine.

Again, I don’t remember all the details, but the stickers sold fairly well. Being young, lazy, and naive, I didn’t want to wait very long to start seeing my share of the money. So, I made Howard and Kristi a deal – in exchange for an amount of money I no longer recall, I’d give up my share in the partnership. They agreed and paid me a lump sum. I used this money to buy a hot tub. I was planning on getting the “No Bozos” logo silk-screened onto the hot tub but I never did. I should have.

That’s most of the story. After a while the “No Bozos” logo lost its popularity. Maybe this was because a whole bunch of other logos with similar designs started appearing. I don’t know. I do know that “No Bozos” had a resurgence when Eddie Van Halen and Steve Wozniak were photographed wearing “No Bozos” tee shirts.

As far as I know, any “No Bozos” products you see now are bootlegs. I’ve seen stickers for sale in various places but they aren’t the official ones. (There’s an easy way to recognize official stickers but I’m going to keep that a secret for now). I don’t really mind seeing them. In fact, it makes me feel good. About 10 years after starting the company with Howard and Kristi, I contacted Howard to see if he had any interest in re-releasing them but he was on to bigger and better things. I haven’t talked to either of them in probably 30 years. Plus, now that Larry Harmon is dead, I don’t know what would be required to do it again.

Every now and then somebody asks me about the story behind “No Bozos”. Now you know.

How To Speak Internet

I don’t like to listen to people talking about the Internet, but not for the usual reasons. What bothers me is that it all starts to sound the same. “blog”, “www”, … over and over again, like a big echo chamber. So, I’ve chosen not to pronounce these words like everybody else. For me, “blog” is “b log”, “www” is “wa wa wa”, and so on. You get the idea. I’m sure there are other, maybe better, examples. So, like they say at rap concerts, “let’s make some noise out there” but let’s do it a little differently.