Categories
career

Work Is Not a Place – Part 2

A few principles of the Second Law of Thermodynamics serve to prove the irrefutable superiority of remote work for knowledge workers:

1. Natural processes are irreversible.

2. Concentrated energy is more efficient.

3. Entropy reduces efficiency.

The human body is basically a system with a finite amount of energy that is tasked with accomplishing work. In the case of knowledge workers, electrochemical energy (brain power) is burned to produce actual work in exchange for money.

As with any system, energy in the human body varies greatly with the environment in which it operates, and this includes the mental framework. Efficiency can reach dizzying heights in a remote-work environment, but it’s not automatic. It requires a disciplined mind and a renewed dedication to actual work, from both manager and employee.

What is the finite amount of energy at our disposal? The eight-hour workday might be a good baseline for manual labor, but for creative work, five hours of peak productivity per day is around the most we can expect from anyone (assuming chemical intake is caffeine and not something stronger like Adderall). This is supported by research, my practical experience, and general common sense. The remainder of the requisite eight-hour day can be spent doing tasks that are less brain-intensive, like research, revision, and the inevitable administrative work.

Remote work allows for maximum efficiency, like some zero-loss, theoretical energy source recovered from an alien spacecraft marooned on Earth.

By contrast, the physical office environment disperses the concentration of energy and limits efficiency to somewhere between zero and thirty percent, which just happens to be the range at which an internal combustion engine converts old-school petrochemical fuel into forward momentum. In this work arrangement, energy is wasted on non-work activities in the effort to get to (and exist in!) the place where we’re paid to think.

1. Natural processes are irreversible.

Business objectives must comply with the laws of physics and human nature, not the other way around. In an average work week the employer owns a set amount of the employee’s energy, and the employee agrees to devote that amount of time to the working process. All the effort devoted to work that does not involve actual work gets decremented from the total output, regardless of the quality of the employee. When the limit is reached, productivity takes a dive. On the outside the employee may appear to be working, and this might give the manager a warm feeling; but warm feelings do not contribute to the bottom line. On the inside her bullshit meter is beyond the threshold and she’s turned off. Employee energy is a currency, and it’s the employer’s responsibility to spend it wisely. Spending that energy wisely results in success for the organization, and it starts with providing the right environment for its employees.

2. Concentrated energy is more efficient.

How is the knowledge worker’s finite energy dispersed by the physical presence requirement of the traditional office?

First, there’s THE COMMUTE. For me the actual work is not so bad. Generally speaking I like what I do, and I’d do it even if I had all the time and money in the world. What bothers me are the thousands of little actions that comprise the stupid routine of getting my meat mass to the work site, existing there for nine-plus hours, and then getting back, when there is no logical reason for me to be there in the first place. Sure, I usually make the best use of my commute time, but I’d rather be focusing on actual work, fulfilling my actual obligations to my employer, and then getting on with life.

True, it is my responsibility to commute from home to the work site, but as noted above, it doesn’t matter. Natural processes cannot be reversed.

Most people have it much worse. I devote the absolute minimum time to wardrobe and hygiene, and then I take it easy on the train, reading a book or planning my day. Most U.S. workers have to deal with the hellish nightmare of rush-hour traffic in a motor vehicle, which luckily I have not experienced in twenty-plus years. How productive can an employee be if he just spent the past hour screaming his intent to murder the mothers of people who cut him off in traffic? Recently I calculated that one of my friends had driven the equivalent of more than six times around the world in the past ten years, just commuting back and forth to work. Somehow, to him this was not just normal but commendable. To me it’s insane, a wooden board cracked over the head.

The commute has a massive impact on the environment and on long-term health. With teleworking, of course, there is zero energy loss. The commute is zero. Food prep is zero. Clothing prep is zero. All these things can be done during breaks in the work day. Impact on the environment: zero. Let’s stop this harmful, masochistic routine once and for all.

3. Entropy reduces efficiency.

Then there are all the continuous, idiotic distractions that only an office can produce. OH THE HUMANITY. People invade your space and force you to listen to their stupid jokes, gossip, and opinions about world events. There’s always some guy in the next cubicle over who just won’t shut the hell up. Some days the senseless meetings are non-stop. For me, the tendency to produce actual work is much greater when working remotely, as there has to be some measurable impact of my work. In the office I tend to slack, simply because being seen and heard is perceived (wrongly) as having equal value to actual work.

The importance of written communication often goes unrecognized in the physical office. It’s not unusual to spend time crafting a clear description or explanation in an email reply, only to have the recipient run to your cubicle for an offline follow-up, leaving others out. Email is asynchronous for a reason. There’s real value in taking time for deliberate thought. It’s also not unusual to verbalize the same information to many different people in different ways, in addition to writing about it. This is because interpersonal, verbal communication just feels better, especially to those people whose only job is to appear important. Even if people have no idea what you’re talking about, they smile and nod because they feel better hearing your words, and they feel better knowing that you’re working in a professional environment with them, wearing similar “business casual” clothing, looking polished, professional, and smart. None of this has anything to do with the actual product or service that we’re paid to provide, but we’re all dancing the same stupid dance, and in this way the inter-subjective experience becomes real. Communication in an office setting typically requires triple the energy actually necessary to convey thoughts and ideas.

In the very best case scenario, THE PHYSICAL OFFICE environment is not as good as what you have at home. Does anyone have a cubicle at home? The answer is “no,” unless you’re hopelessly locked into the wage-slave mentality. People are not meant to sit in boxes. A bad physical environment drains energy, dramatically decrements the bullshit tolerance, and neglects other human needs like comfort, fun, and hygiene. There are very real studies showing how bad lighting and cold air, for example, reduce employee productivity.

The “existing there” part is currently the most challenging for me, as it’s the extreme opposite of what I used to have. By contrast, in my old teleworking job I used to proactively invent improvement projects and find work to do when things were slow. I don’t do this anymore. All I’m thinking about all day is getting the hell away from that place.

In a typical week I drop about thirty hours on the above three items (the commute, the humanity, and the physical office). This might seem like a lot, but when you factor in all the water-cooler chats, the distractions, pointless meetings, meat-mass transport, navigating to the nearest vacant toilet seat, and hunting for a suitable place to sit down and eat lunch, the time adds up. So I’ve burned about three-quarters of the energy I’ve agreed to devote to the job, and this is before any actual work has been done. If there’s something urgent to do then I’ll do it, but my finite energy level and my business sense are telling me that I’ve only got a total of ten hours of productivity left to give.

At the end of the week in my old teleworking job, I had accomplished a solid thirty or forty hours of actual work, which, if people are absolutely honest, is a heck of a lot of brain work in one week for any job. My bullshit meter almost never got pegged, as energy spent was at or around the forty-hour limit. I was happy, engaged, and consistently gave the company very high efficiency on the hours logged.

As an added bonus to the organization I never called in sick in over ten years of working from home. Why not? Because I never had to serve my time in the virus distribution center called “The Office”. I still worked even if I was sick, and nobody suffered because of it. I never felt like I had to take long breaks from work. So the end result was win-win.

Remote work offers employer and employee the most efficient expenditure of human energy and the highest productivity in terms of actual work produced. Never mind the opportunity to eliminate the massive cost of maintaining the bizarre circus of bullshit called the physical office.

In the next post I’ll wrap up a few mind-tingling thoughts.

Categories
career

Work Is Not a Place – Part 1

A quarter of a century ago, a new, beneficial tech promised to boost knowledge worker productivity and save employers loads of cash. Virtual Private Networking (VPN) allowed employees to work remotely, cutting back on carbon emissions and the senseless waste of time getting to and from work. But for many employers, their management styles and conservative business practices did not mesh with the new tech. It took a global pandemic to make those old-school laggards see the light.

If you’re able to work anywhere and you’re serious about getting the work done, then working remotely is the only way to go. It’s a win-win for the employer and employee, no matter how you look at it. I can attest to this truth as an eleven-year veteran of remote work who transitioned to an organization with a strict physical presence requirement. The new job offered a few nice perks, but the change was like stepping into the past, back to a time when it was common to confuse “being there” with doing actual work. Or, put another way, it was like jumping into an Idiocracy future, where stupidity reigned supreme.

Teleworking is the most productive work environment because it offers the unique opportunity to maximize work and minimize the ceremonial bullshit that does not matter. Many people who have always worked in an office don’t understand this idea because stupid office customs have been so deeply ingrained and accepted as a normal part of work. People commonly label their bullshit as “work” and brag about how much of it they do. Mandatory physical presence requirements for brain workers are nothing short of an assault on the intellect (and possibly on physical well-being, in the current environment of COVID-19).

It’s as if some hypothetical person who knew nothing about our world was introduced to the concept of work for the very first time, and on his first day in the office somebody walked up to him and cracked a two-by-four over his head. His boss would say, “Oh yeah, you might want to wear a helmet tomorrow,” and every day for the rest of his career the guy gets hit in the head with a board, accepting it as completely normal.

There are only two reasons an employer could enforce physical presence during a time like this. One, the employer is sadistic, immoral, and cruel to their employees and society as a whole. And/or two, the job in question is fake. The fake-job phenomenon exists more often than we might like to admit. There are whole industries that are fake. I’ll touch on this later. For now, the point is that people should get paid for actual work – not for getting hit in the head with a stupid lumber stick.

Before a discussion of teleworking can even begin, it’s necessary to re-examine the most basic concepts of work, because its true definition has become blurred in the modern age. First there is the concept of a little something we can call “actual work,” the measurable service or product an employee produces in exchange for money.

The second basic concept is “energy,” the finite life force an employee is able or willing to devote to actual work in a given period of time. After the energy is spent, the individual might show up, but they’re a disabled meat sack, existing but not producing. Energy levels are very real factors in economic output. The Ford Motor Company was the first to standardize the forty-hour work week – not for humane reasons, as is often cited, but because they determined this to be the economic sweet spot for getting the most out of their manual laborers. Countless studies have supported this number, and even lower numbers for those doing strictly brain work. Any effort past these limits produces short-term diminishing returns and long-term negative returns. Employees take time off for sick leave – either because they really are sick, or because their bullshit meter is pegged in the red.

Working remotely eliminates the wasteful, customary bullshit of the traditional office, freeing up time for real work (and more importantly, life). It’s unfortunate it took a global pandemic to remind some organizations of a single, blatant fact. For many of us, “work” is not a place.

Categories
vertical farming

Mirai Vertical Farm

Last Saturday we packed the kids into the minivan and navigated the highways, bridges, and tunnels of the most populated metro area on earth, destined for Chiba, home of Tokyo Disney, but we wouldn’t be visiting the Magical Kingdom again today. Our destination was an indoor vertical farm called MIRAI. The only problem was that I still wasn’t sure why.

A few weeks earlier I had contacted Spread, near Kyoto, operator of the largest indoor farm in Japan, but they had turned down my request for a tour. By some miraculous coincidence my wife knew the CEO of Mirai from her university days, a gentleman named Nozawa Nagateru. I was surprised and elated when he agreed to give me (us) a personal tour of their Chiba farm. I would’ve preferred to go alone; but I needed my wife there for introduction and translation, and with her came the kids.

The drive from our place on the Shonan coast was an hour and a half. My eyes were burning with seasonal allergies. Spring was the season of suffering for many people in Japan.

As we crossed the Yokohama Bay Bridge, I caught sight of the cruise ship that had been in the news. There were some people with COVID-19 “Corona” virus on board.

Mirai grew produce in clean room environments, where workers wore full body suits and breathing gear, sanitized in an airlock before going in. It was better quality control than whatever they were doing to contain the virus on the cruise ship. A day or two in that clean room and my allergic reactions would also subside.

Entering Tokyo, the kids slept and I got the head space to think about what I was going to ask Nozawa-san. How would I represent myself to him? Why was I visiting an indoor farm? I wasn’t a journalist. I wasn’t a horticulturalist. I didn’t know anything about growing plants. Never mind the fact that I didn’t have a business card, the obligatory prop of any professional engagement in Japan. At least we were bringing souvenirs from our hometown of Kamakura, and a paperback copy of my book. I had worked in information technology for most of my life, and I was the author of an unknown novel that happened to feature a character who abandoned his corporate gig for a life of helping people and growing things indoors. If anything I was an otaku (geek). There were otaku obsessed with everything from Pokémon to bullet trains, so why not vertical farms? Either way, I was going on a whim and a dream, armed only with the desire to work with beneficial tech.

The Mirai farm consisted of a couple of tidy, nondescript warehouse buildings tucked into the midst of an industrial zone. A mob of uniformed workers from a nearby factory smoked cigarettes outside the convenience store down the street. If anyone had to pick the most unexpected place to find tens of thousands of vibrant green plants, this was it.

We parked in front of the main building and Nozawa-san met us outside. We exchanged greetings Japanese-style and he ushered us to the side of one building. Inside and upstairs, we entered an observation room overlooking a vast space containing rows of illuminated towers ten meters high, each containing thousands of plants in various stages of growth. Monitors on the wall displayed graphs of data points collected from the farm below, as well as from other facilities, both local and remote. “Mirai” meant “future” in Japanese, and they lived up to their name.

Nozawa-san poured green tea into paper cups and we sat down at a table to talk. His company was one of the early pioneers of indoor farming technology, with over fifteen years in the business. Their unique, data-driven approach to cultivating perfect produce could be described as precision farming, focusing on the balance of over two hundred growing factors. Like all vertical farms, their main selling points were steady, reliable production of perfect produce at a fair price. Of course their products were free of pesticides and GMOs. Best of all, their vegetables tasted great. We sampled fresh romaine lettuce and basil from a platter on the table. Each plant was ninety-five percent edible, with minimal stalk to throw away. Their produce was clean at harvest time. No need to wash.

Aside from the Chiba facilities, Mirai owned and operated a lab and another large farm in Tagajō, Miyagi prefecture, the area hit hardest by the Great Tōhoku earthquake of 2011. I asked if this lab was built in response to the double-whammy natural and manmade disasters there. Yes! Absolutely, he said. They chose this site to help the area recover, and to demonstrate the benefit of their tech. Sendai wasn’t badly affected by fallout from the Fukushima meltdown, but I guessed their clean-room facilities would be immune to radioactivity, as well as to pollen, vermin, and disease.

Mirai sold their design and built farms in numerous locations that were hard-pressed by resource issues and/or harsh weather conditions, with successful operations in Antarctica, Northern China, Mongolia, and Russia. Nozawa would soon visit Norway to consult on the construction of a farm there. Mirai was a leader in urban farming, too, having built farms in cities like Tokyo, Shanghai, and Beijing.

Mirai’s method was up to one hundred times more resource-efficient than traditional farming, using just two percent of the water and a fraction of the land. Their primary expenses were electricity (for the LED lights, pumps, cooling, and control systems), labor, and insurance.

Not knowing anything about the business, I asked the most common-sense question on my mind: what would it take to make Mirai more cost-efficient than traditional farming? Sure, their product was of the highest quality and healthier on every level, but at what point would it make more economic sense to grow produce with the Mirai method as opposed to traditional farms? It turned out to be a good question. Mirai’s solution was in the size of the plant. By growing bigger plants they could essentially get more bang for the buck. It wasn’t as simple as that, of course. Other operations, like Spread, got more bang for the buck by growing smaller plants and harvesting quicker. It was a complicated equation of precision farming, taking into account numerous factors of resource usage, output, and time.
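The trade-off can be made concrete with a toy model. Every number below is invented for illustration; the real equation involves the two hundred growing factors Mirai tracks, so this only shows the shape of the comparison.

```python
# Toy model (all numbers invented) of the size-versus-speed trade-off:
# operating cost per kilogram harvested from one growing rack.

def cost_per_kg(kg_per_plant, days_per_cycle, plants_per_rack, daily_cost_per_rack):
    """Operating cost per kg of produce harvested from one rack."""
    yield_kg = kg_per_plant * plants_per_rack
    total_cost = daily_cost_per_rack * days_per_cycle
    return total_cost / yield_kg

# "Bigger plants" strategy (Mirai): longer cycle, heavier heads.
big = cost_per_kg(kg_per_plant=0.15, days_per_cycle=40,
                  plants_per_rack=1000, daily_cost_per_rack=50.0)

# "Faster harvest" strategy (Spread): smaller heads, quicker turnover.
fast = cost_per_kg(kg_per_plant=0.08, days_per_cycle=20,
                   plants_per_rack=1000, daily_cost_per_rack=50.0)
```

With these made-up inputs the two strategies land within about a dollar of each other per kilogram, which is the point: neither approach wins outright until the resource, output, and time factors are dialed in.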

Next I asked Nozawa-san what he thought of the impending world food crisis, and how Mirai might play a role. He was of course aware of American vertical farm marketing campaigns, but he answered flatly: “starving people don’t want lettuce”.

Excellent point. Poor countries facing population explosions and resource deficits needed protein and grain. The best answer to this problem was the kind of work done by George Kantor, Senior Systems Scientist with Carnegie Mellon University’s Robotics Institute and its FarmView project, whose work focused on using technology to accelerate the efficiency of traditional farms. CMU was working on tech that didn’t require radical change; it could be plugged into existing, traditional farming techniques. Even the trillions of data points collected by start-ups like Plenty seemed applicable only to growing plants indoors, so it still wasn’t clear how vertical farming would save the world.

There was only so much to keep our young kids entertained in the observation room. They were getting unruly to the point that it was impossible to concentrate, and it was a natural time to go. Nozawa-san had two boys, too, so he understood. We decided to call it a wrap, taking a couple of photos and presenting our gifts. Nozawa was impressed with my novel even if he couldn’t read it in English. He asked why I wrote. I shrugged and told him it was the kind of work I wanted to do. I think he assumed my reason for the interview was research for my next book. Who knew? Maybe it was. Later it occurred to me that this was our first family outing in nine years dedicated to something I wanted to do. I didn’t stray far from the path between work and home. My visit to Mirai was a real treat. It may have been unclear how I could contribute to the business of indoor farming, but the fact that this was how I chose to spend my first real “me” time in nearly a decade spoke loud and clear. I was drawn to beneficial tech. It was work I wanted to do.

MIRAI Vertical Farm

FarmView: CMU Researchers Working to Increase Crop Yield With Fewer Resources

Categories
tech vertical farming

Vertical Farms

If I could do any kind of work, I’d devote a good part of my week to working at a vertical farm. (Or building one myself.) A couple years ago I wrote a novel called TOKYO GREEN. It’s about a guy who abandons his high-paying job in Silicon Valley to live a more natural, beneficial life. The MC ends up in Tokyo, where he builds an indoor farm for a bunch of retirees, with the help of a rogue AI. At the time of writing the novel I didn’t know much about vertical farming. I still don’t know much about it now. But I do know I want to get involved. I’ve worked twenty-plus years coding and automating systems, and I have a rough understanding of how machine learning works. This was the knowledge I brought to writing the book, and it’s the knowledge I could bring to vertical farming today.

I’m thinking about this now because I got an invitation to visit a vertical farm in Chiba, Japan (near Tokyo Disney) in a couple of weeks, and I want to have some informed questions for them. In particular I’m interested in the computing systems and software they use, as this is the most likely way I can contribute my current skills.

In general it does not seem likely that vertical farming will offer many job opportunities, as the technology that makes these operations economically feasible is also the technology that is eliminating the need for human work. Still, I’m imagining my contribution to beneficial technology starting with the question “what kind of work would I do if I could choose anything?” I’d write, of course, but vertical farming is at the top of the list.

Vertical farming is a sustainable, cost-effective method of Controlled-Environment Agriculture (CEA). It combines tech like aquaponics, hydroponics, robotics, and machine intelligence (analytics) to grow plants indoors. I’ve also seen “urban farming” and “low-impact farming” used to describe this kind of AGTECH, but these are general terms. Vertical farming refers to the particular way of growing plants in tower racks or even sideways, on a wall.

Hydroponics is all about growing plants without soil, in a nutrient-rich solution, and one way to get those juicy nutrients is through the age-old techniques of aquaponics. There is something very appealing about creating a cyclic, sustainable ecosystem of plants and water-dwelling creatures and organisms. For example, the hydroponics system might drain its water into a catfish tank for re-circulation and watering the crops. It reminds me of when I’d siphon fish poop out of my fish tanks and use it to feed the plants. Aquaponics takes it a step further, completing the loop.

How is vertical farming beneficial? It produces fresh, hyper-local, inexpensive greens with superior flavor, using sustainable techniques. The very nature of Controlled-Environment Agriculture means the product is free of pesticides and GMOs. What could be more beneficial than that?

These indoor farms are also far more productive than soil-based farming, growing thirty or more crops per season, and they use a fraction of the resources. Commonly cited stats claim up to 98% less water, a tiny fraction of the land, and zero shipping fuel compared to field-grown produce that’s delivered to stores from far-away places. It would be interesting to know the net carbon footprint of a big vertical farm operation, considering the electricity it sucks off the grid. (In TOKYO GREEN, the MC uses solar panels to run his farm.) Some indoor farms use actual sunlight, further reducing the need for electricity.

Vertical farms are on the rise. I read somewhere that the industry has grown from near nothing to US$3 billion invested in the past three years. Even if those numbers are off, the industry promises to grow even more in the coming years.

The recent ascent of vertical farming is made possible by certain technologies that have become robust and cheap enough to make this method of food production economically feasible. These technologies include perception devices (cameras and sensors used to monitor and measure plant growth), machine intelligence that processes the data produced by the perception tech, and robotics that carry out the instructions of the AI.
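That three-part loop – perception, intelligence, actuation – can be sketched in a few lines. Everything here (sensor names, readings, target ranges) is invented for illustration; real systems are far more involved.

```python
# Hypothetical sketch of the perception -> intelligence -> actuation loop.
# All sensor names, readings, and target ranges are invented for illustration.

def read_sensors():
    # The perception step: a real farm would poll cameras, EC/pH probes,
    # thermometers, and humidity sensors. Here we fake one snapshot.
    return {"air_temp_c": 24.5, "humidity_pct": 70.0, "ec_ms_cm": 1.9}

def decide(readings, targets):
    """The 'machine intelligence' step: compare readings to target ranges
    and emit adjustment commands for the robotics to carry out."""
    commands = {}
    for name, (low, high) in targets.items():
        value = readings[name]
        if value < low:
            commands[name] = "increase"
        elif value > high:
            commands[name] = "decrease"
    return commands

targets = {
    "air_temp_c": (22.0, 26.0),
    "humidity_pct": (60.0, 75.0),
    "ec_ms_cm": (1.5, 1.8),  # nutrient concentration range
}
commands = decide(read_sensors(), targets)
```

In this fake snapshot only the nutrient concentration is out of range, so the loop would tell the dosing pump to back off; everything else is left alone.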

According to this article, Carnegie Mellon University (CMU) has developed an integrated system called ACESys (Automation, Culture, Environment) that powers vertical farming, but the only thing I saw on the CMU website about this system was an implementation of it in a vertical farm in Taiwan, in an effort backed by the University of Illinois. There’s nothing about what the system is on a technical level, or how it works.

CMU is doing a lot of great things with robotics and AI, with a particular focus on AGTECH. Their mission is to avoid a world food crisis by the year 2040. By “world” they mean what used to be referred to as the third world, as that’s where populations will explode. CMU is developing soil-based, high-tech farming that will enable existing farms to produce all the food they need while reducing resource usage by half or more. I’d love to dive deeper into what CMU is doing, but for now I’ll focus on vertical farming. In the next week I’ll take a look at what various vertical farms are doing around the world.

Categories
tech

Beneficial Corporations

I’m tempted to write the title of this post with a question mark, as it’s not always clear that corporations are good. On the macro level, corporations have been a huge boost to elevating human quality of life. Governments can be slow, incompetent and wasteful. Corporations move fast. They’re good at raising capital and achieving big things. They can also do great harm.

I’ve been employed by corporations for the past couple of decades, yet I still don’t consider myself “the corporate type”. Very few people do. Still, corporate employment is the safest way to go if you live in America and have to pay for health care for a family of four or more. If solvency is your thing, then somebody in the household had better be getting that subsidized health insurance as part of a corporate plan. It would be nice if working for a corporation were both the safest and the most ethical way to bring home the bacon. I’ve never worked for an organization that benefited the greater good, but in the next five years I’d like this to change.

What qualifies as beneficial? I’ve thought about this before and I’ll think about it again. A search for “beneficial corporations” brings up the usual lists of “best companies to work for,” and the companies deemed to have the most social responsibility. Topping these lists are the likes of Facebook and Google. This should raise an eyebrow. One could argue that Facebook and Google are among the least socially responsible companies of all time, as they maximize advertising effectiveness by unleashing uncontrolled psychological experiments on the entire civilized world.

Tech companies are always cited as being the best to their employees, but they also work their employees to death. If you work for a Silicon Valley company then chances are you’ve kissed that work-life balance goodbye. You might’ve even found yourself crying alone in your cubicle as you miss your daughter’s dance rehearsal … again.

Work-life balance is a term that should not even exist. It implies that work is not good, a contrast to life. In reality work is an integral part of life. But the kinds of work people do these days can be anti-human, unnatural. I know that words like “human” and “natural” can be ambiguous, but if a term like “work-life balance” enters the common vernacular then something’s not right.

Even if work-life balance at a company is excellent, the benefits these companies provide to the world are usually limited to whatever perks they give their employees. Some companies seem to have a guilt complex about this.

In the past decade or so it has become popular for corporations to ask (demand?) that their employees volunteer their time to a charity, to “give back” to the community. This is one way to define social responsibility. However, the term “give back” is suspicious. To “give back” implies something was taken, probably without consent. If a corporation pays taxes and fulfills its legal obligations, isn’t that enough? Why are these reparations to the community necessary? If the organization was doing something beneficial in the first place then all of this mandatory volunteering would not exist.

One peculiarity of our economic system is that really beneficial jobs don’t pay good money. If you hear about someone helping someone else, you’re going to assume they’re not getting paid. Childcare, elderly care, teaching, farming: they’re all essential to maintaining our quality of life, not to mention our survival. These jobs are often thankless, the most difficult, and the least lucrative. Some (like stay-at-home mom) don’t pay anything at all. On the flip side, a hedge fund manager, a corporate lawyer, even an engineer working as a cog in the military-industrial complex – all of these are worthless (if not harmful) to society, but the money is nice.

What if there was some way to accurately measure and qualify the value of helping people? What if this became the new currency? Why can’t we have a system that incentivizes the pursuit of excellence while also taking care of our own?

I’m still not sure what qualifies as a beneficial corporation. I’m refining the definition as I go. Maybe it’s enough for a company to take care of its own employees and leave it at that. For me, it would be an awesome step in the right direction to help people and get paid. I’m going to keep an eye out for opportunities. My next employment situation will be with an organization that’s doing some good.

Categories
coding tech

Multimedia Archiving

Wouldn’t it be awesome if there was an easy way to catalog and control the ever-rising deluge of photos and videos we generate with our devices, a system of organizing that could be transferred to future family members for safe-keeping? What if this system had the following traits?

  1. locally-controlled (by you)
  2. decentralized (resilient)
  3. platform-independent
  4. with a standardized file structure
  5. and a standardized file naming scheme
  6. that is both effortless
  7. and flexible

Well, that would be awesome, indeed.

It so happens I have such a system. It achieves the first five of the above traits, but it’s not yet effortless (if there is such a thing) or flexible. Without me the system falls into disarray. This post isn’t exactly a life hack – not yet, anyway. It’s the first of what will be several posts on this project as I get closer to making it more flexible and easier for others to use. With this post I wanted to explore the reasons for such a system, and to illustrate the general idea.

Ten years ago my dad gave me a box of photos and slides, which I scanned and integrated into my family’s multimedia mess. This effort began a home-grown archiving system that would come to be known as the Multimedia Archive Project (MAP). For me it’s a workable solution, still evolving today.

My ultimate long-term goal is to establish a system (more a protocol than an actual set of tech) that is easily transferable to my kids and subsequent generations. A decade later I’m in a holding pattern, still looking for technology that could suit my needs. Here’s some more details about what I mean by the above traits and why they’re important to me.

What is “locally-controlled”? I want the primary location of my family’s multimedia files to remain in my hands, so to speak. I’m not contributing to the oceans of photographic knowledge that an artificial intelligence uses to shape the world in ways I don’t see fit. This might seem paranoid now, but the world is starting to understand “free” services for individuals have big costs to society as a whole. It’s very important that I maintain control.

“Decentralized” just means it’s impossible to lose data. This part isn’t exactly effortless. It requires discipline and planning that most people aren’t willing to do. The Multimedia Archiving Project is backed up in the same system I use to back up all the data in my household, which includes consolidation and copies made to mirrored USB drives, a NAS, and a cloud service based in Switzerland that is a stickler for General Data Protection Regulation (GDPR) rules.

“Platform-independent” is another cornerstone of this project. I don’t want to be locked into any single app, or depend on one company’s services. When I started this effort ten years ago the big cloud services were making it difficult to switch platforms. They’re a little nicer now, but to some degree this is still true.

Apple offers a great all-in-one photo archiving solution, and they’ll no doubt be around for decades to come. I’d be the first to recommend Apple to anyone who doesn’t have the discipline or technical chops to handle a DIY solution, based on their track record of quality software (iTunes for Windows notwithstanding) and data privacy. Still, I prefer platform-independence. My family and I have some Apple devices, but we don’t have a Mac. What if I put my trust in a company that changes the rules twenty years down the road in a way that is unethical, inconvenient, and/or too expensive for me?

At the opposite end of the spectrum, I’d rather delete everything than trust a mind-control advertising platform like Facebook or Google with my family memories. Last year we went skiing with another family in beautiful Niseko, Japan. The other dad and I were on the mountain together one afternoon and he took a video of me as we skied down the slopes. I thought it would be an awesome video, with the sun in the right position and the spectacular scenery. And it was! The only problem was it had been live-streamed to Facebook and I didn’t have a Facebook account. Never mind me. What if the photographer wanted to preserve this video – or any media – and pass it down to his kids? They’d need Facebook accounts, too. This illustrates the importance of platform-independence. It’s the freedom to never be locked down to a proprietary system that defines how you can use your own stuff.

There are some very positive trends in digital identity that could work in my favor as the decades unfold (see previous post). Bottom line, I want the flexibility of moving my family memories securely and safely, with the maximum privacy levels, whenever I want.

The “standardized file structure” and “file naming scheme” are the coolest features of the system. They’re inspired by ISO standards. This gets into how this system works.

How does this thing work?

First, there are rules, because every system has rules. Fortunately most of the rules are enforced by code, but the first one must be observed by humans: NEVER MODIFY ORIGINALS.

The second rule is there is one and only one destination path for any given source device, a file folder in the ORIGINALS directory. For example, there is a folder for all the originals backed up from my wife’s iPhone, a folder for my camera photos, a folder for our Gopro, and so on. These devices and paths are configured in an XML file. This system runs on Windows, so I use PowerShell. In the future I might go with Linux and Python.
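As a purely hypothetical illustration (the post doesn’t show the actual schema, so every element name and path here is invented), the device-to-destination mapping in the XML file might look something like this:

```xml
<!-- Invented schema: one entry per source device, each with exactly
     one destination folder under ORIGINALS, per rule number two. -->
<devices>
  <device name="wife-iphone">
    <source>E:\DCIM</source>
    <destination>D:\MAP\ORIGINALS\wife-iphone</destination>
  </device>
  <device name="gopro">
    <source>F:\DCIM</source>
    <destination>D:\MAP\ORIGINALS\gopro</destination>
  </device>
</devices>
```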

The process begins by running a script to “add new files to archive,” which reads the XML file for source and destination paths, checks to see if the devices in question are plugged into the system, and if so compares the latest photo and video files on the device with what’s already in the archive. If there’s new stuff then it copies it to the destination folder. I run a separate script to rename the files in a standard format so that anyone can take one look at the file name to know the date it was created, by whom, and where (all this data is available in the metadata of standard media files). A five-digit sequence number is tacked to the end of the file name. Ten years ago I never thought I’d have more than 99,999 files per device, but who knows? My wife is approaching 10,000 photos and videos now after five years with one iPhone (and these are the files remaining after she deletes stuff from her phone).
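The real scripts are PowerShell, but since a Python port is on the table, here’s a minimal sketch of the renaming idea in that language. The exact name format (date, owner, location, five-digit sequence) is inferred from the description above, and the file’s modification time stands in for real metadata extraction; all names are illustrative:

```python
from datetime import datetime
from pathlib import Path

def archive_name(taken: datetime, owner: str, location: str, seq: int, ext: str) -> str:
    """Build a standardized file name from creation date, owner, location,
    and a five-digit sequence number. The exact format is an assumption;
    the post doesn't spell it out."""
    return f"{taken:%Y%m%d}_{owner}_{location}_{seq:05d}{ext}"

def rename_new_files(folder: Path, owner: str, location: str) -> None:
    """Rename media files in a destination folder using the scheme above.
    A real version would read the date and location from EXIF metadata;
    here the file's modification time is used as a stand-in."""
    for seq, f in enumerate(sorted(folder.glob("*.*")), start=1):
        taken = datetime.fromtimestamp(f.stat().st_mtime)
        f.rename(f.with_name(archive_name(taken, owner, location, seq, f.suffix)))
```

Renaming this way keeps the files self-describing: the name alone carries the date, the source device's owner, and the place, with the sequence number as a tiebreaker.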

Since rule number one is NEVER MODIFY ORIGINALS (renaming doesn’t count as a modification, as it does not change the “last modified” timestamp), I maintain a separate directory for “COLLECTIONS,” which are basically photo albums of certain events or seasons. This is a manual effort and probably always will be. I don’t have the AI at my disposal to magically identify people, places, and events to assemble a photo album on the fly.

When the files are copied, updated, and renamed, I then kick off the backup script (basically a fancy Robocopy) to replicate the changes to the various backup locations, including the folder to sync with the cloud service.
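The actual backup script wraps Robocopy, but the same one-way mirroring could be sketched in Python for a future Linux version of the system. This is a crude stand-in for `robocopy /MIR`, not the author’s script, and the function names are invented:

```python
import shutil
from pathlib import Path

def mirror(src: Path, destinations: list[Path]) -> None:
    """Replicate the archive to each backup location, copying only files
    that are missing or newer at the destination (one-way sync)."""
    for dest in destinations:
        for f in src.rglob("*"):
            if f.is_dir():
                continue
            target = dest / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                shutil.copy2(f, target)  # copy2 preserves timestamps
```

Because unchanged files are skipped, running this against the USB mirrors, the NAS folder, and the cloud-sync folder stays cheap after the first full copy.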

In the future I might keep this basic system intact but expose a portion of it to a paid AI service to assist with categorizing, facial-recognition, tagging, and the like.

In days of old, family memories might be preserved in the form of hard-copy photographs in a shoe box. Back then, the problem was keeping this single point of failure safe from fires and floods. Now, the problem is we have too much stuff. Some intervention is necessary, and this system works for me. As for “effortless,” I’m not sure I’ll ever completely reach this goal. Maybe the point of an archiving system is that it should require some effort; otherwise how do we decide how we’re represented to future generations?

Categories
identity tech

Digital Identity and Better Quality of Life

It’s amazing we’ve gone this long with password protection as the primary way to prove who we are and what belongs to us online. Nobody likes user names and passwords, and they’re a hacker’s dream. Two-factor authentication is more common now, but it’s often proprietary and a short-term fix. Digital identity is much bigger than the convenience of logging into websites: without an expansion of this technology’s capabilities, the future of our civil liberties is at stake.

A universal system of digital identity is a crucial piece of establishing meaningful and effective digital rights. To protect our future freedoms, we need an authentication system that is:

  • universal
  • decentralized
  • highly-available
  • sovereign (to the individual)
  • and most of all, secure

So what is the big deal with digital rights? The answer is obvious for corporations; their value is directly tied to their ability to maintain the integrity of their intellectual property, all of which is digital.

For individuals, the data we give away increasingly determines how much liberty we enjoy and what opportunities come our way. What we share online can determine whether we’re approved for a loan, accepted to a school, hired by an employer, or asked out on a date.

Fortunately there have been positive developments in the establishment of digital rights for individuals in recent years. (See links to articles at the end of this post.) Digital rights are an issue in the U.S. presidential race for the first time I can remember. It’s about time.

Digital rights should be as clear-cut as property rights, but there’s still a lot of gray area for individuals. Part of the problem is lack of awareness. Another issue is that most of our digital property exists outside our direct control. On a technical level we can say that an online account belongs to us and that the files associated with that account also belong to us. But if the data gets into the wild then how can we take credit or claim it’s ours? With the right approach, digital identity has the potential to protect our virtual property more securely than age-old property rights.

In decades past, if someone stole your precious collection of Creedence Clearwater Revival albums on eight-track tapes, you could call the police, but chances are you’re not going to get any “leads” on who stole your old-school tunes. In more recent years, if you purchase a product, say a blender, that allows online registration (and you’re diligent enough to complete the process and hold onto the registration info), then you have at least the potential to claim indisputable ownership of that blender if it’s stolen and then miraculously shows up at a garage sale or an eBay auction. The online registration process is analogous to how Public Key Infrastructure (PKI) encryption works. The manufacturer is the Certificate Authority. The serial number stamped into the blender is the private key, and the proof of purchase is the public key, as it’s transferable.
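The registration analogy can be sketched as a simple hash commitment. This is a deliberate simplification of real PKI (no key pairs or certificate chains), and the serial number and function names are illustrative:

```python
import hashlib

def register(serial: str) -> str:
    """At purchase, the manufacturer records a fingerprint of the serial
    number. The fingerprint (like a public key) is safe to publish; the
    serial itself (like a private key) stays with the owner."""
    return hashlib.sha256(serial.encode()).hexdigest()

def prove_ownership(serial: str, registered_fingerprint: str) -> bool:
    """Anyone can check a claim of ownership against the public
    fingerprint without the registry ever revealing the serial."""
    return hashlib.sha256(serial.encode()).hexdigest() == registered_fingerprint
```

The point of the sketch is the asymmetry: the public record proves nothing on its own, while whoever can produce the matching serial can prove ownership to anyone.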

What’s the right approach for data? What about all the bits and bytes that come from our smart home, our refrigerator, and multitude of devices we use? With a universal system of digital identity, would every outgoing packet be digitally signed and tagged in a way that can’t be hacked? (See Oasis Labs, below.) Anyone who has worked in cyber security knows there’s no such thing as one hundred percent secure, but with today’s technology we can get close enough.

In the past couple of years there has been a lot of talk about “self-sovereign identity” and biometrics, secured with block-chain tech. Block-chain ledgers bring the system a little closer to impenetrable by making the certificate authority distributed. A startup called Oasis Labs is working to make this a reality, but they’re in the beginning stages. Microsoft has been backing “self-sovereign digital identity” for a couple of years. This is a technology that would be a huge benefit to individuals interested in protecting their digital rights. No surprise, Facebook is not on board.

Digital identity is a hot issue right now. Like most people, I’m no expert on the topic, but we’d all be well advised to follow it closely in the coming years.

Microsoft and decentralized identity “Microsoft believes everyone has the right to own their digital identity, one that securely and privately stores all personal data. This ID must seamlessly integrate into daily life and give complete control over data access and use.”

Oasis Labs “With Oasis Labs you can use data without liability, easily comply with new regulations, and collaborate on shared data without risking privacy or losing control.”

GoodID looks interesting, but I can’t tell from their website how the technology works or whether it meets the criteria for digital identity mentioned at the beginning of this post.

As for digital rights:

(For an illuminating example how data we give away can affect the course of our life, see “It’s time for a Bill of Data Rights” in the MIT Technology Review.)

Some other recent and noteworthy articles:

Utah Just Became a Leader in Digital Privacy

Contract for the Web “A global plan of action to make our online world safe and empowering for everyone”

Categories
data tech

Data is the New Dirt

You may have heard that “data is the new oil”. You might also sense a bit of marketing hoopla in this phrase. There are a few ways to define “big data” but none of them fits with this analogy. My definition comes from my experience in transforming disparate data sets into business intelligence for the purpose of enabling an organization to accomplish its tasks. The data by itself is useless. Data is the medium from which value is extracted, not the value itself. In this view it’s more accurate to say that data is the new dirt.

It is true that data needs to be transformed just like oil needs to be refined, but the key difference is that oil is a finite resource and data is infinite. The scarcity of oil determines its value. Oil in raw form has value. A barrel of crude is worth about US$60 today (I would’ve guessed much higher!) but raw data is worthless. In fact it’s less than worthless. It cuts into the bottom line.

Imagine you’re a CIO of a big organization. You’re standing in a state-of-the-art data center humming away with endless rows of sleek cabinets packed with the latest server hardware, each hosting tens of thousands of virtual machines running database server software on untold petabytes of storage. The lighting is low and the place pulses with high-tech power. It’s all very bad-ass. But as you walk down the center aisle you approach a wall where there’s a huge LED display with a seven-digit number spinning out of control – an amount representing the net cost in millions of dollars per year spent storing all this data. Your gut tightens as you fathom the volumes of data flooding the data center and the costs spinning straight to hell. Those numbers are burning into your retinas as you stare up at the LED. Your face is about to melt off like the Nazis in the climactic scene in Raiders of the Lost Ark. But you make your wisdom saving throw and recover just in time, calling HR and telling them to hire some data professionals now.

The point is it’s not the data that’s valuable; it’s truth, and these days there’s a scarcity of truth. The number of things into which oil can be refined is limited. On the other hand data can be molded into any information that is somewhere on the scale between insanely valuable and totally useless. Transforming data must be a flexible, adaptive process or its value is never realized. Somewhere in this mountain of data are facts, the rarest nuggets in the world.

Admittedly, my professional view spells big data with a little “b”. What I work with is nothing compared to the vast oceans of Big Data processed by Silicon Valley powerhouses and the internet of things. Big Data in this sense may very well be the new oil for a handful of tech companies, but anyone who has flown from Houston to Galveston knows the impact oil refining can have on the environment. The data centers that store all this data use a lot of juice, the production of which also affects the climate. Social media also outputs a massive amount of unchecked social pollution. Looking at the bigger picture, “Data is the New Oil” can also imply that everyone’s data is valuable and every individual should be getting rich, too. In the future there will be stronger, well-defined data rights to support this, but for now it’s delusional and dangerous thinking, a point made in the excellent blog post, “Data isn’t the new oil, it’s the new CO2,” by Martin Tisné, managing director at Luminate, a philanthropic organization I follow. I plan on writing more about Luminate in the future, as well as Mr. Tisné’s article in the MIT Technology Review, “It’s time for a Bill of Data Rights”.

Categories
tech

Best Toilets on Earth

There are many things I love about Japan, and one of them is their cleanliness and centuries-old devotion to good hygiene. The Japanese are known for their sensible protocols for staying clean, like not wearing shoes in the house and showering before baths. They also have the best toilets in the world.

When Commodore Matthew Perry sailed his “black ships” into Edo Bay in 1853, the level of tech he brought with him must have been terrifying to the Japanese, who were still running around in robes, carrying katana and spears. At the time, the Japanese were centuries behind in war tech, but they were centuries ahead of the Westerners when it came to staying clean.

The comedian Ron White jokes that the most luxurious items in his Beverly Hills home are the Japanese toilets. He got so accustomed to the toilet lid opening as he approached that he defiantly pissed all over traditional toilets when they didn’t obey.

In Japan these “luxury items” are standard in every house, and the motion sensors are the least of their awesome perks. The most beneficial feature is the bidet. After the bidet, toilet paper is only needed for a light patting to dry off, and you’re done. No repetitive wiping with coarse, dry paper. The comedian Hasan Minhaj accurately observes that wiping a dirty ass with dry paper is the most ineffective way of cleaning. If you stepped in dog shit would you clean your shoe with a dry cloth? No, you’d run water over it to clean it off. Many cultures have adopted a moist towel approach to wiping, but the Japanese built-in bidet is far superior.


There are many other wonderful features of the standard Japanese toilet, like UV light to sterilize the toilet bowl after you finish, and warm toilet seats that keep your butt warm on winter mornings. The toilets are more resource-friendly, offering the option for small or large flush. Each toilet has a control panel, either on the arm rest (yes, arm rest) or mounted on the wall. Some of the display panels can be bewildering. There are options to adjust seat warmth, water pressure, nozzle position, energy saving mode, deodorizer, and so on. I still don’t know everything our toilets can do. Japanese toilets are not only the cleanest and most comfortable, they’re healthier for the butt, too. These toilets are a game-changer. After experiencing this beneficial tech, there’s no going back.

Categories
social media tech

The Roaring Twenties

On New Year’s Eve 1979, I was eleven, cozy in my pajamas after a nice hot bath. I lay on my belly on the carpet of my grandparents’ living room, filling the pages of a sketch pad with dreams of the new decade to come. The Eighties were going to be kick-ass!

My grandparents lounged in lazy boy chairs and smoked as we watched an episode of M*A*S*H. Later we watched Lou Grant. I know these details now because I still have the sketch pad, and I’m reading the notes. (I just confirmed these shows were broadcast on the day in question, although there’s no mention of what we watched in the interim one hour between the shows.)

I didn’t take notes of every evening, but it was the first time I was conscious of entering a new decade. I drew spaceships and electric cars (the kind of stuff Elon Musk works on now, in real life), and other beneficial tech that would no doubt improve our lives in the future. Technology was exciting and cool. It would have been impossible to imagine an uncontrolled psychological experiment performed on the human race for the purpose of selling products and ideas, let alone draw it in my sketch pad. The inability to conceive of such a thing then is why many people were slow to understand the negative impact of social media today.

(I didn’t intend for this to be another post about the negative impact of social media, but it’s useful as a way of further defining what I mean by beneficial tech.)

Flash back to the scene of me watching TV with my grandparents on the last day of 1979. This was the golden era of TV, when there were just three competing networks who were in business to sell advertising. At the start of each show there was a commercial break, and another cluster of ads every fifteen minutes thereafter, with a few national ads and maybe a couple slots for local stuff. There were probably twenty or thirty ads aired during the time my grandparents and I watched TV that night, but nobody was paying attention. Commercials were for socializing, grabbing a bowl of ice cream, or taking a bathroom break. TV shows like M*A*S*H were not created for the purpose of selling things. Their entertainment value was incidental to the advertisement business that supported the platform on which they aired.

On the surface it looks like the content of Facebook is also incidental to its ads. But the amount of content is infinite, and “the algorithm” chooses what each individual sees. Put back in the TV age, this would be like everyone watching a slightly different version of the same show. At some point during an episode of M*A*S*H, Hawkeye would pause, look directly at me, and wink, holding the toy I hadn’t received for Christmas. “Still want one of these?” On an emotional level that’s how Facebook works.

Except it takes time for this magic to do its thing. The tech needs to learn every nuance of our behavior and moods. This requires hundreds of hours of our attention. In order to harvest the maximum amount of attention, social media companies employ armies of psychologists and developers to engineer addiction. The idea is to silo people into easily-marketable groups, and the best way to do this is to get them hooked and incite emotional reactions. It just so happens the easiest human emotions to tweak are all negative: anger, rage, hatred, and fear. Multiply this effect by billions of tweaked individuals and the result is a world of angry, depressed, divided masses who view the people on the other side of their engineered, bipolar worldview as sub-human.

This sucks.

It doubly sucks because on almost every measurable level we’re living in the best time in all of human history.

What to do? It doesn’t matter if someone doesn’t use social media, because everyone else does. Anyone who abstains from the madness is still breathing in the toxic fumes, like second-hand smoke.

My grandparents were cool, not because the commercial brainwashing of the day had convinced me that smoking was cool (it very likely had), but because they were fun people and I loved them. They loved me too, and would’ve never intended to do me harm; but one side-effect of hanging out with them was a continual lungful of second-hand smoke. At the time, I didn’t mind the smoke. Nobody did. It was 1979.

Is social media even less beneficial than cigarettes? On the micro level there are no doubt positive things that happen on Facebook and other platforms, but it’s possible to argue this point. Cigarettes were intentionally addictive and non-smokers breathed in the harmful byproduct. Forty years later, a technology called social media is intentionally addictive and non-users experience harmful byproducts. But the difference is that social media is also intentionally toxic. Cigarettes were incidentally toxic. If there had been a way for tobacco companies to engineer their product so that it didn’t kill their customers, then I’m sure they would’ve done it. Social media companies are colossal advertising platforms that purposely divide people, and mass-produce negativity for the purpose of selling products and ideas.

Should social media be regulated like tobacco? Maybe, but we’d need new laws. Anti-trust litigation isn’t the right tool. Alphabet and Facebook are indeed monopolies. (Their combined market capitalization is $1.3 trillion as of last year, and no one else is even close.) But monopoly is not the problem. The problem is that social media companies are wielding unchecked control over a dangerous technology that does measurable harm to society. How is this not the most alarming thing in the world? Would it be beneficial to humanity if this market became more competitive through government anti-trust intervention? I don’t think so. Instead, the ad-based business model needs to go.

Social media should be sold as a premium service. In the past thirty years companies like HBO and Netflix obliterated the ad-based model of the big three TV networks. It turned out there was a big market for quality entertainment. The result was Peak TV. Shows like M*A*S*H were great, but any given show on Netflix today would be the best show on TV forty years ago. This further defines what I mean by beneficial tech.

A hundred years from now historians will look back on this time and see early social media as humanity’s first big mistake with AI. The technology is at a nascent stage. It can be a bridge to something better down the road.

So what about my dreams of a kick-ass decade? The Eighties turned out to be filled with a mix of miracles and personal tragedies, like any decade for anyone. There was also an explosion of fantastic entertainment and exciting change. I’m optimistic about the upcoming decade.

Let the Roaring Twenties begin!