From Nothing to Something with Minimum Viable Documentation

More and more startups and enterprises are recognizing the importance of high-quality product documentation, but it’s tough to know where to start. I’ve taken a few enterprise software products from “nothing to something” documentation, and this is the framework I’ve built for myself to create MVD—minimum viable documentation.

Diagram using a dotted line circle with an arrow toward a pink shaded box with "MVD" inside to represent going from nothing to minimum viable documentation.

If you’re a technical writer trying to find your footing, or someone who cares about adding user documentation for your software but have no idea where to start, this is the guide for you.

What is minimum viable documentation?

If documentation is a product (and it is), minimum viable documentation is the bare minimum documentation that is useful and helpful to customers.

Something good is better than something chaotic and unhelpful, and it’s much better than no documentation at all. It’s also easier to focus on getting to minimum viable documentation rather than trying to reach full-featured documentation as soon as possible, because you’re a human with a life that is not your job.

Venn diagram with overlapping circles of Helpful, Useful, and Quick intersecting to form MVD.

You might be working with a fully-functional software product that has no useful documentation. In that case, getting to full-featured documentation isn’t your primary goal—getting to minimum useful documentation is. So let’s get started.

Define minimum viable documentation for your product

Before you can write MVD, you need to define what it is for your product. MVD differs depending on your market, customer base, product type, pricing structure, and more.

I recommend you do the following to define what MVD looks like for your product. 

1. Talk to your colleagues

Your goal with these conversations is to get a good understanding of who the target user is for your product and the goals they want to accomplish with your product.

Diagram with 5 circles, 1 each representing PM, UX, Docs, Engineering, and Marketing.

If you have product management, start with them. Find out as much as you can about why the product is being built, who it’s for, and how the product is being positioned in the market. 

Also talk to engineering management or senior engineering subject matter experts (SMEs). What user problems is the software trying to solve? What level of expertise do the engineers assume the user has?

If you’re lucky enough to have a sales or marketing team, talk to them. Because of their efforts defining the customer journey, they can help you understand who the audience is and what the key success workflows look like. Who is the product targeting? Why do they want to use this product? What problems are they trying to solve?

Talk to the user experience designers to get an understanding of the user personas they’re designing for and what workflows they think have the most friction. You can also get a sense for how the team approaches their role, whether they’re more focused on designing friction-free workflows or pixel-perfect screens. 

After talking to PM, EM, UX, and marketing, you can do the following:

  • Identify what level of expertise a typical product user has, both with the domain and with the product. This functions as your audience definition.
  • Write down the main goals of a user before and after they start using your product. What motivates the user?
  • Map out the key workflows that a user is going to perform in the product. What tasks is the user trying to accomplish?

2. Perform a documentation competitor review

It’s always a good idea to know what your competitors are doing! If you’re not sure what products your product is competing with, ask your sales or marketing people for a list or do some research on your own. 

Pick 3-5 companies to focus on, such as your strongest competitors in terms of funding, usage, or closeness to what your product does. 

You also want to make sure you’re not benchmarking off useless garbage when you perform your competitor review. In addition to the 3-5 competitors you identify, pick a couple industry leaders or companies that your colleagues mention as having good documentation, such as Stripe API docs, Microsoft docs, or even IBM docs, and include one or two of them in your competitor analysis. 

The advantage of choosing the documentation of a couple larger products to review is that they tend to have established documentation teams and offerings in a variety of markets. This makes it easier to find a product that is well-documented and at least somewhat adjacent to what your product does.

Diagram using different colored shaded rectangles to represent competitor documentation, with an arrow pointing to MVD.

The goal of your competitor analysis is to identify how the companies provide documentation about their product(s). Pay attention to the following:

  • How is the documentation structured?
    • By feature?
    • By use case?
    • By persona?
  • What documentation is provided?
    • What workflows are covered? 
    • What seems left out?
  • What type of documentation is it?
    • Lots of conceptual information about how the product works?
    • Heavy reference information but light on the how-to tasks?
  • Who is the documentation targeting?
    • Look for introductory content or tutorials
    • Is there advanced developer content? 
  • How is the documentation site built?
    • Use the inspect element option in your web browser or a site like https://builtwith.com/ to figure out what technology the documentation site is built with.
  • Anything else interesting?
    • Does a company have an interesting way of differentiating beta functionality?
    • Are code samples hidden behind toggle-to-expand options?
    • Is there a plethora of gifs, videos, or other multimedia in the documentation?
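For the “how is the site built” question, a small script can sometimes answer it from the page source alone. This is a minimal sketch: the fingerprint list and sample HTML are made up for illustration, and real detection (as a site like builtwith.com performs) is far more thorough.

```python
import re

# Hypothetical helper: guess a docs site's generator from its HTML source.
# The fingerprint list is a small set of common conventions, not an
# exhaustive or authoritative database.
FINGERPRINTS = {
    "docusaurus": "Docusaurus",
    "mkdocs": "MkDocs",
    "sphinx": "Sphinx",
    "gatsby": "Gatsby",
    "hugo": "Hugo",
}

def guess_generator(html: str) -> str:
    """Return a best-effort guess at the tool that built the page."""
    # Prefer an explicit <meta name="generator" content="..."> tag.
    match = re.search(r'<meta name="generator" content="([^"]+)"', html, re.I)
    if match:
        return match.group(1)
    # Fall back to scanning the source for known tool names.
    lowered = html.lower()
    for needle, name in FINGERPRINTS.items():
        if needle in lowered:
            return name
    return "unknown"

sample = '<html><head><meta name="generator" content="Hugo 0.110"></head></html>'
print(guess_generator(sample))  # Hugo 0.110
```

In practice you’d fetch the live page source (or use the browser’s inspect element option) and look for the same signals by hand.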

Document your findings, of course, and feel free to share your findings with your team during a demo. After all, they might be wondering what you’re doing if you haven’t started writing yet. 

3. Assess the current state of your documentation

If there is any sort of documentation for your product, you want to know what it is. It might be a sad README and some code comments, or it might be detailed, multilayered documentation without much organization or clear goals.

Diagram with a transparent dotted circle pointing to {} encasing a column chart, circles with customers, marketing, UX, engineering, and PM, plus a dotted outlined rectangle with "existing content?" inside pointing to an MVD square, representing the pre-planning process.

To get a sense of the current state, I recommend doing the following:

  • Audit the existing content. Identify which topics are covered in the documentation already, and where. Make a list, and also keep track of what topics seem to have a lot of detail, and what you suspect might be outdated. This is a cursory audit, not an in-depth one that you might perform if you were migrating content.
  • Look at the documentation analytics. If you have analytics for the documentation site, take note of which pages are most frequently viewed, which pages might be serving as entry points, and how much time people spend on various pages. 
  • Talk to your team and get their thoughts on the current documentation. Who has been writing it so far? Are they attached to any topics in particular? Do they share specific topics with customers regularly?
  • Interview customers of the product and documentation to see what they want to see or find most useful today.
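If you have raw analytics exports, even a small script can surface the most-viewed pages and likely entry points. This is a minimal sketch: the page paths, numbers, and field layout are made up, so adapt it to whatever your analytics tool actually exports.

```python
from collections import Counter

# Hypothetical pageview export as (page, views, is_entry) rows, where
# is_entry marks views from sessions that started on that page.
pageviews = [
    ("/docs/install", 420, True),
    ("/docs/get-started", 380, True),
    ("/docs/api/auth", 95, False),
    ("/docs/install", 130, False),
]

views = Counter()
entries = Counter()
for page, count, is_entry in pageviews:
    views[page] += count
    if is_entry:
        entries[page] += count

print("Most viewed:", views.most_common(2))
print("Top entry points:", entries.most_common(2))
```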

Depending on the quality of the existing documentation, these steps might not be that helpful in informing your approach, but they can help you set benchmarks for documentation growth and quality, plus identify links you likely don’t want to break.

If you don’t have any documentation, still talk to your team and customers. If you can’t talk to customers for some reason, you can look for discussions about the product on social media like Reddit, Twitter, or Hacker News to identify themes that people ask questions about or really enjoy about your product.

A brief note about terminology: As you review competitor and existing documentation and interview internal and external folks, you might find that your product has some inconsistent terminology. At this stage, you might want to delay the writing process while you create a definitive list of terms to use for the product. This type of work can take more time upfront but it’s easier to create consistency from the beginning than to apply it after the fact. 

Define the structure of your documentation

Before you start writing, you want to create a structure or a framework to place your topics into. 

Diagram with the empty circle pointing to shaded rectangles structured in a hierarchy of three chapters, one with 3 topics below it, one with 2, and another with 4, all pointing to the MVD shaded square.

The structure for your MVD is directly informed by the work you did to define what MVD looks like for your product, plus some information-architecture-specific research. 

  • Revisit your conversations with colleagues. What workflows and functionality might be important to highlight? Who is buying your product? Who is using your product?
  • Refer to your competitor review notes. How did your competitors and benchmark docs structure their documentation?
  • Research information architecture best practices. Refer to some key articles from the Nielsen Norman Group, as well as the book How to Make Sense of Any Mess by Abby Covert, and the associated worksheets.

After this research, draft up some chapter headings and possible topic titles to start with, then get feedback from your UX, PM, Engineering, and Sales and Marketing folks. How accurate, relevant, or helpful does the new structure seem? Have you made any assumptions that don’t make sense for the customer base?

Expect this information architecture to change as you write the MVD, and especially as you develop full-featured documentation. This is the nature of a minimum viable product! Put a task in your backlog to refine the structure after you finish MVD and are approaching full-featured documentation. That way you can iterate without confusing your customers with frequent changes, and you can plan ahead so that you don’t break any links.
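One lightweight way to avoid breaking links when you restructure is to maintain a redirect map from old paths to new ones. This is a minimal sketch with made-up paths; real sites usually handle redirects in the web server or static site generator configuration rather than in application code.

```python
# Hypothetical redirect map for an information-architecture reshuffle:
# old doc paths on the left, their new homes on the right.
redirects = {
    "/docs/setup": "/docs/get-started/install",
    "/docs/faq": "/docs/troubleshooting",
}

def resolve(path: str) -> str:
    """Follow redirects (including chains) until reaching a final path."""
    seen = set()
    while path in redirects and path not in seen:
        seen.add(path)
        path = redirects[path]
    return path

print(resolve("/docs/setup"))  # /docs/get-started/install
```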

After you design the initial information architecture, you can start writing. 

Start writing minimum viable documentation

So you know what minimum viable documentation might look like for your product, but how do you get there? MVD is all about creating useful content for your users, so start with the entry content!

Venn diagram with three circles, one with identify key information, one with describe the path to success, and a third with clarify complexity, with MVD at the intersection of all three.

Focus on key information for customers

As with any “minimum viable” approach, you’re trying to get a basic functional framework down before you start improving it. As you lay that framework, be mindful of scope creep.

Think back to the key workflows that you mapped out earlier. Broadly cover the top few workflows and then flesh out details as you get more comfortable with the product and understand the user goals better. Why go broad instead of going deep into a specific workflow? You’re still learning what the customer finds useful, and what level of detail they might want or need about a specific workflow. 

If you spend a lot of time writing a highly detailed workflow that you thought was important and it turns out it’s actually pretty intuitive for customers—that’s time that you could have used to write about something that was really confusing and holding back customers from succeeding with your product. 

It’s likely that you’ll encounter cases and situations that you want to write more about. That’s great! Write them down and put them in a backlog to address later. For now, you want to stay focused on these minimal workflows to build out the minimum viable documentation for your product. You can get fancy with use cases and in-depth examples later. 

Identify the simplest path to success

Within those broad key workflows, start with the simplest path to success, the “happy path” that most of your customers will take. 

That might involve writing a series of topics like:

  • “Get started using Best Product Ever”
  • “Install Best Product Ever”
  • “Set up Best Product Ever”
  • “Accomplish Straightforward Task in Best Product Ever”

Get those written, reviewed, and published and start helping people use your product that much sooner. 

Clarify any complexity

After you write the documentation to support the simple path to success, what do you write next? Documentation that unravels where complexity lurks in your product. 

Depending on your product familiarity, you might need to take more time to research and lean on technical SMEs a bit more to write this, but it’s worth it. This documentation content might be topics like:

  • “Configure the Weird Setting You Must Touch”
  • “All About This Task That Everyone Wants to Do but No One Can Find”

You don’t want to get bogged down in documenting around product complexity here. Stay focused on the complex aspects of the key customer workflows, and the crucial information customers need. What might confuse someone if you left it out? What assumptions have you been making about the user that need to be made explicit? 

This is often the step when I remember to write things like software requirements, role-based restrictions to functionality, or other crucial cases that are often assumed when developers write their own documentation.

Get feedback and iterate

I assume you’re focusing on minimum viable documentation because you have more work than you have time to complete it. That’s why it’s important to iterate. Yes, I just harped on the importance of prioritization and focus—and it’s essential to make sure that what you prioritize and focus on is still important. 

Diagram showing an MVD shaded rectangle with an arrow pointing across to circles with PM, engineering, and customers, then another arrow pointing back to MVD, to emphasize the importance of a feedback loop for your MVD.

Check in with product management and engineering management regularly (I’d recommend weekly if your release cadence is every few months or slower) about what you’re prioritizing and why.

This check-in is mostly about getting signoff and validation, not direction—but don’t ignore the direction that PM and EM can offer you! If there are important releases coming up that will affect one of the key workflows on your list, you might want to document that workflow sooner, or in more detail than you might otherwise for MVD. 

Use these conversations as a way of discovering what customers are paying attention to, and what your PM and engineers are paying attention to as well. 

As you send your documentation out for technical review, you might also get feedback that you can use to improve your approach to MVD. With any luck, much of the feedback will duplicate what you have planned—and that’s helpful validation for your approach.

You might get so much feedback that you have to dump a lot of ideas into “plans to write this later” and a backlog that feels like it’s spiraling out of control, but if you stay focused on your scope, you’ll get to that backlog sooner and with a more comprehensive understanding of your documentation and your customers.

If the direction and feedback you get from your team is pretty far removed from your approach to MVD, it’s helpful to discuss why that might be and treat it as prioritization guidance for your future plans. Maybe you misunderstood a key target customer, or the purpose of the product in the market. You might discover you need to realign your understanding and vision of the documentation with that of your team. 

What’s next after MVD?

When do you know that you’ve reached minimum viable documentation? It’s somewhat of a fuzzy line. When you notice that you’re writing documentation by adding to existing topics, or writing net new example content, or documenting new features instead of existing features — you’ve moved past MVD and into shaping full-featured documentation. 

As you start shifting into that mode, you’re no longer focused on creating the skeleton structure to build off of, but filling in the details and settling into the usual work of modern technical writing.

Shaded MVD box pointing to {} boxes emphasizing the headers that follow, work through backlog, improve product, create examples, collect feedback, review analytics, all pointing now to a filled in square labeled full featured documentation.

1. Go through the backlog

Start going through your backlog of ideas. Revisit those ideas and group similar ones together, adding audience definitions, acceptance criteria, and learning objectives where you can. Note who the technical SMEs are and whether any upcoming releases are relevant for some of the tasks. 

Ideally, you’re storing this backlog in the same spot as your engineering backlog so that your work is visible to the engineering team. 

Work with PM or EM to prioritize those tasks and start working through them. As any writer for a fast-paced development team knows, product development often happens faster than you can write about it, so you’ll never run out of tasks in your backlog.

2. Suggest product improvements

As you went through a flurry of documentation writing to produce MVD, you likely identified some parts of the product that might need to be improved. Again, work with your PM and engineering teams to discuss possible product improvements. 

You can also suggest product improvements that directly involve the docs, such as reviewing UI text in the product or auditing product pages to find opportunities for in-app documentation or context-sensitive help. This is a great opportunity to partner with the UX team as well.

Partner with your engineering and UX teams to make suggestions and build those relationships based on your newfound product and customer expertise. 

3. Write use cases and examples

To create more useful content for your customers, you probably want to flesh out specific example scenarios for using your product. You might have written some already as quick start use cases for getting started with your product, but you likely want to write more for the next stage of customer product understanding.

You can use example content to describe customization options for the product, or highlight domain-specific use cases for a market that your customer might be trying to break into. 

4. Ask for feedback

You put all this effort into creating minimum viable documentation, but how viable is it really? 

Ask your technical SMEs, sales and marketing teams, customers, and really anyone who might interact with the documentation internally or externally whether they have feedback on your documentation improvements and information architecture.

You could perform some tree testing with the MVD structure to see if there are some improvements you can make to the information architecture as you flesh out the documentation, or just have short conversations with stakeholders. 

Use the feedback you get to help shape priorities for your backlog. However, don’t treat all feedback you get as tasks that you must perform—if someone asked for it, it must be important, right?

Instead, validate feedback against your target audience definitions and user goals. Sometimes you’ll get feedback relevant only to a specific edge case that doesn’t make sense to document in the official documentation, or feedback related to a product bug that isn’t something necessarily appropriate to address in the documentation. 

5. Review analytics

Review documentation site analytics. Analytics are an imperfect source of feedback, but if you established a benchmark earlier, you can check whether the entry-level pages that you created or updated are now the most popular pages.

  • Are the pageviews higher, or at least somewhat proportional to the user base of your product? 
  • Are there any surprising outlier pages that have a lot of views that you might want to focus on? 
  • What search terms are popular? 
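To make the pageviews-versus-user-base question concrete, here is a minimal sketch with invented numbers; the 1.5 views-per-user threshold is an illustrative assumption, not an industry benchmark.

```python
# Hypothetical monthly numbers: active product users, and pageviews for
# the entry-level docs pages you created or updated.
monthly_active_users = 2000
entry_pageviews = {
    "/docs/get-started": 1500,
    "/docs/install": 1100,
    "/docs/weird-setting": 4200,  # a surprising outlier worth a closer look
}

for page, views in entry_pageviews.items():
    ratio = views / monthly_active_users
    flag = "outlier?" if ratio > 1.5 else "ok"
    print(f"{page}: {ratio:.2f} views per user ({flag})")
```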

You can use these findings to inform your plans and priorities.

Get from nothing to something with MVD

It can be intimidating to create a set of documentation for a product from scratch, but I hope this post outlines a basic approach that can help.

Diagram showing an empty circle with a dotted line border and an arrow pointing to a pink shaded square labeled MVD, which points to a larger pink filled in square labeled full featured documentation.

Start by defining what MVD looks like for your product by talking to colleagues, performing a competitor review, and assessing the current state of documentation. Then do some additional research and define the initial structure of your documentation. 

After you’ve laid the groundwork, start writing. Focus on key information for customers and identify the simplest path to success. Clarify any product and task complexity, and seek out feedback. Regularly make changes to what you’ve written as you learn more about the product and your customers. 

As you evolve beyond MVD to full-featured documentation, work through your backlog, suggest product improvements, write use cases and examples, and continue asking for feedback. You can also review site analytics to get a sense of how far you’ve come and what you might want to focus on next. 

Whether you’re a professional technical writer, a committed startup founder, a generous open source contributor, or someone else, I hope you can use this framework to document your software product.

I tried my best to create a minimum viable blog post to describe this minimum viable documentation framework. As such, I might not have gone into much depth about how to perform a competitor review, get buy-in for terminology proposals, or how to handle the full range of feedback you might receive on your documentation. 

If you have feedback or questions for me, or want to see more details about a specific topic, don’t hesitate to reach out on Twitter @smorewithface.

How can I get better at writing?

As a professional writer, I frequently get asked, “as a ______, how can I get better at writing?” I’ve never had a good list of resources to point people to, so I finally decided to write one. I’ve worked hard to become a good writer, and I’ve had the privilege of many good teachers along the way.

If you’re not really sure why your writing isn’t as good as you want it to be, that’s okay. In this blog post, I’ve identified the strategies that I use to write well. I hope they’re useful to you. 

Where to start

Read and write more frequently. You can’t get better without good examples or practice. If you want to get better at writing you need to read more and you need to write more. 

Identify what you’re trying to improve. Maybe you struggle with grammar, or in clearly communicating your ideas. Maybe it takes too many words for you to get your point across, or you can’t quite connect with the people reading your writing. 

Write accurate content by improving your grammar and word choice

Use a tool like Grammarly, or enable grammar checking in whatever tool you use to write, if it’s available. If you don’t want a mysterious AI reading your writing, you can use other resources to improve specific aspects of your grammar.

Some key concepts to focus on are the (more pedantic) grammar rules that I still struggle with:

  • When do I need to use a hyphen to connect two words? See Hyphen Use, on the Purdue Online Writing Lab website. 
  • Did I split an infinitive? What is a split infinitive, anyway? See Infinitives, on the Purdue Online Writing Lab website. 
  • Does my relative pronoun actually clearly refer to something or do I have a vague “that” or “it”? See Pronouns in the Splunk Style Guide.

The somewhat silly yet practical book, The Curious Case of the Misplaced Modifier by Bonnie Trenga, might also be a useful read.

Write helpful content by defining outcomes before you start

Before you start writing something, whether it’s a slide deck, an engineering-requirements document, an email, or a blog post like this one, consider what you want someone to do after reading what you wrote. 

Often called learning objectives or learning outcomes in instructional design, defining outcomes can help you write something useful and focused. Sometimes when you’re writing something, other extraneous ideas come to mind. They can be valuable ideas, but if they distract from your defined outcomes, you might want to remove them from your main content.

Some example outcomes are:

  • After reading this blog post, you can confidently draft a clear document with defined outcomes.
  • After reading this engineering requirements document, my colleague can provide accurate and helpful architecture feedback on the design. 
  • After reading the release notes, I can convince my boss that the new features are worth an immediate upgrade. 

I also want to note that if you write an outcome focused on someone understanding something, rewrite it. It’s tough to measure understanding. It’s easier to measure action. For that reason, I try to write outcomes with action-oriented verbs. For more about writing good learning objectives, see the Learning Objectives chapter in The Product is Docs.

Write focused content by identifying your audience

Who will be reading your writing? What do they know? Who are they? What assumptions can you make about them? 

If you can’t answer these questions about the people reading your writing, you won’t be able to clearly communicate your ideas to them. You don’t have to be able to answer these questions with 100% certainty, but make the attempt. 

If you recognize that you’re writing something for multiple audiences, consider breaking up the content into specific sections for each audience. For example, architects might care about different content than a UI engineer, and a product manager might care about different details than the backend engineer.

If you identify the different needs of your varying audiences, you can write more consistently for each specific audience, rather than trying to address all of them all the time. For more on identifying your audience, see the Audience chapter of The Product is Docs.

Write findable content by considering how people get to it

How people get to your content can influence how you write it. If people use search, an intranet, or direct links to find your content, you might make different decisions about how to structure it. 

I always assume that people are finding my content by searching the web. They’ve typed a specific search query, found my content as a result, and opened it with the hope that it’s the right content for them.

Consider what people are searching for that can be answered by your content, and write a title accordingly. Spend time on the first few sentences of your content to make sure that they further clarify what your content addresses. 

For example, I titled this blog post “How can I get better at writing?” because I expect that’s what a lot of people might type into their preferred search engine out of desperation. I could call it “7 quick tips to improve your writing”, but that’s not how most people type search queries (in my opinion).  

Mark Baker’s book, Every Page is Page One, covers a lot of information related to this concept. He uses the term “information scent” to describe the signals that indicate to a person that they’ve found the right content to answer their question, and “information foraging” to describe the process of looking for the right information.

Write readable content by considering the structure

People aren’t excited to read technical content or technical documentation. No one rejoices when they get an email. I get paid to write technical documentation and I still avoid reading it if I can. Because people don’t want to read your content, structure it intentionally. 

Write for skimming. Bullet points are often better than paragraphs. Tables are often better than paragraphs. 

Put information where it needs to be. If you’re writing a series of steps, make sure the steps are actually in the right order. For example, if something needs to be done before all the steps can succeed, put it before the set of steps as a prerequisite.

You also want to consider the desired outcomes of your content and your audience when you structure your content. It can make sense to focus on one audience in one piece of content, or one desired outcome in one piece of content. Don’t try to do too much in one piece of writing. 

Nielsen Norman Group has an incredible set of research and recommendations about how people read and how you can structure your content, and their articles on the topic are well worth reading.

Write clear content by intentionally choosing your words

You want to make your content easy to find and easy to understand. To do this, you need to be consistent and intentional about the words that you use.

Use consistent terminology. This isn’t the time to write beautiful prose that uses different words to mean the same thing. Don’t overload terms by using the same term for multiple things, and don’t use multiple terms to refer to one thing. Use the same terms and use them consistently. 

If something is a JSON object, call it that. Don’t call it a JSON object sometimes, a JSON setting other times, or a JSON blob other times. Pick one term and use it consistently. You might have to pick an imperfect term and live with it. It happens! There are only so many words to choose from. 

Be intentional about the words you use. Consider the words that your readers use to describe what you’re writing about, and use the same words if you can, even if those words don’t match up completely with the feature names in your product.

If all of your software’s users refer to “dark mode” instead of “dark theme”, you might need to use both terms in your content so that people can find it. For some internal documentation, you might need to map the internal names that people use for a feature to the external names used in the product.

If you’re not sure what term to use, find out what terms your readers are already using. If you have access to search query logs of your website search, review those for patterns. If you don’t already have readers or users for your product, you can do some competitive analysis to understand what terms are in common usage in the market. 

You can also check the dictionary or use a tool like Google Books Ngram Viewer or Google Trends to identify common terms for what you’re attempting to describe. 
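If you do have search query logs, tallying candidate terms takes only a few lines of code. This is a minimal sketch with made-up queries, reusing the dark mode versus dark theme example.

```python
from collections import Counter

# Hypothetical search log, one query per line. Tally how often each
# candidate term appears to see which wording readers actually use.
queries = [
    "enable dark mode",
    "dark mode settings",
    "dark theme not working",
    "turn on dark mode",
]

candidates = ["dark mode", "dark theme"]
tally = Counter()
for query in queries:
    for term in candidates:
        if term in query.lower():
            tally[term] += 1

print(tally.most_common())  # [('dark mode', 3), ('dark theme', 1)]
```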

Nielsen Norman Group again has some excellent resources on clear writing.

Write trustworthy content by thinking about the future

Errors in content, especially technical documentation, lead to mistrust. When you write a piece of content, consider the future of the content. 

The future of the content depends on the purpose and type of content that you’re writing. This list contains some common expectations that readers might have about various content types:

  • A blog post has a date stamp and isn’t kept continually updated.
  • Technical documentation always matches the product version that it references.
  • Architecture documents reflect the current state of the microservice architecture.
  • An email gets the point across and can’t be edited after you send it.

You must consider the future and maintenance of any content that you write if your readers expect it to be kept up-to-date. To figure out how difficult maintaining your content will be, you can ask yourself these questions:

  • How frequently does the thing I’m writing about change?
  • How reliable does my content need to be?
  • How quickly does my content need to be accurate (e.g., after a product release)?

By answering these questions, you can then make decisions about how you write your content. 

  • What level of detail will you include in your content?
  • Will you focus your efforts on accuracy, speed, or content coverage?
  • Do you want to include high-fidelity screenshots, gifs, or complex diagrams?
  • Do you want to automate any part of your content creation?
  • Who will review your content? How quickly and thoroughly will they review it?

For more on maintaining content and making decisions about your documentation, see the Documentation Decisions chapter in the book The Product is Docs (which I contributed to). 

Feel empowered to write better content

I hope that after reading this blog post you feel empowered to write more accurate, helpful, focused, findable, readable, clear, trustworthy content. This is an overview of strategies. If you want to dig deeper into a specific way to improve your writing, check out the books and articles linked throughout this post.

If you have something you think I missed, you can find me on Twitter @smorewithface.

Detailed data types you can use for documentation prioritization

Data analysis is a valuable way to learn more about what documentation tasks to prioritize above others. My post (and talk) Just Add Data, presented at Write the Docs Portland in 2019, covers this broadly. In this post I want to cover in detail a number of different data types that can lead to valuable insights for prioritization.

This list of data types is long, but I promise each one contains value for a technical writer. These types of data might come from your own collection, a user research team, business development, marketing, or product management:

  • User research reports
  • Support cases
  • Forum threads and questions
  • Product usage metrics
  • Search strings
  • Tags on bugs or issues
  • Education/training course content and questions
  • Customer satisfaction surveys

More documentation-specific data types:

  • Documentation feedback
  • Site metrics
  • Text analysis metrics
  • Download/last accessed numbers
  • Topic type metrics
  • Topic metadata
  • Contribution data
  • Social media analytics

Many of these data types are best used in combination with others.

User research reports

User research reports can contain a lot of valuable data that you can use for documentation:

  • Types of customers being interviewed
  • Customer use cases and problems
  • Types of studies being performed

This can give you insight both into what the company finds valuable to study (and thus some insight into internal priorities) and into direct customer feedback about things that are confusing or the ways that customers use the product. The types of customers that are interviewed can provide valuable audience or persona-targeting information, allowing you to better calibrate the information in your documentation. See How to use data in user research when you have no web analytics on the Gov.UK site for more details about what you can do with user research data.

Support cases

Support cases can help you better understand customer problems. Specific metrics include:

  • Number of cases
  • Frequency of cases
  • Categories of questions
  • Customer environments and licenses

With these you can compile metrics about specific customer problems, how frequently they occur, and the types of customers and customer environments that encounter them. That helps you better understand target customers, or customers that might be using your documentation more than others. Support cases are also rich data for common customer problems, providing a good way to gather new use cases and subjects for topics. 

Forum threads and questions

These can be internal forums (like Splunk Answers for Splunk) or external ones, like Reddit or StackOverflow. Useful data points include:

  • Common questions
  • Common categories
  • Frequently unanswered questions
  • Post titles

If you’re trying to understand what people are struggling with, or get a better sense of how people are using specific functionality, forum threads can help you understand. The types of questions that people ask and how they phrase them can also help make it clear what kinds of configuration combinations might make specific functions harder for customers. Based on the question types and frequencies that you see, you might be able to fine-tune existing documentation to make it more user-centric and easily findable, or supplement content with additional specific examples. 

Product usage metrics

Some examples of product usage metrics are as follows:

  • Time in product
  • Intra-product clicks
  • Types of data ingested
  • Types of content created
  • Amount of content created

Even if you don’t have specific usage data from within the product itself, you can gather metrics about how people are interacting with the purchase and activation process, and extrapolate accordingly.

  • Number of downloads and installs
  • License activations and types
  • Daily and monthly active users

You can use this type of data to better understand how people are spending their time in your product, and what features or functionality they’re using. Knowing that a customer has purchased or installed the product is useful, but it’s even more valuable to find out if they’re actually using it, and if so, how.

If your product is only in beta, and you want more data to help you prioritize an overall documentation backlog, such as topics that are tied to a specific release, you can use some product usage data to understand where people are spending more of their time, and draw conclusions about what to prioritize based on that.

Maybe the under-utilized features could use more documentation, or more targeted documentation. Maybe the features themselves need work. Be careful not to draw overly simplistic conclusions from the data that you see in product usage metrics. Keep context in mind at all times. 

Search strings

You can gather search strings from HTTP referer data from web searches performed on external search sites such as Google or DuckDuckGo, or from internal search services. It’s pretty unlikely that you’ll be able to gather search strings from external sites given the widespread implementation of HTTPS, but internal search services can be vital and valuable data sources for this.

Look at specific search strings to find out what people are looking for, and what people are searching that’s landing them on specific documentation pages. Maybe they’re searching for something and landing on the wrong page, and you can update your topic titles to help.
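
As a rough illustration, here’s a minimal Python sketch that aggregates hypothetical internal search queries to surface the terms readers actually use (the queries and counts are entirely made up):

```python
from collections import Counter

# Hypothetical sample of queries pulled from an internal search log
queries = [
    "dark mode", "dark theme", "dark mode", "enable dark mode",
    "api token", "dark mode", "install", "api token",
]

# The most frequent queries reveal the terms readers actually use,
# which you can compare against your topic titles and headings
common = Counter(queries).most_common(3)
print(common)  # starts with [('dark mode', 3), ('api token', 2), ...]
```

If “dark mode” dominates the queries but your topic is titled “Dark theme”, that’s a signal to adjust the title or include both terms.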

JIRA or issue data

You can use metrics from your issue tracking services to better understand product quality, as well as customer confusion.

  • Number of issues/bugs
  • Categories/tags/components of issues/bugs
  • Frequency of different types of issues being created/closed

Issue tags or bug components can help you identify categories of the product where there are lots of problems or perhaps customer confusion. This is especially useful data if you work on an open source product and want to get a good understanding of where there are issues that might need more decision support or guidance in the documentation. 

Training courses

If you have an education department, or produce training courses about your product, these are quite useful to gather data from. Some examples of data you might find useful:

  • Questions asked by customers
  • Questions asked by course developers
  • Use cases covered by content in courses
  • Enrollment in courses
  • Categories of courses offered

It’s also useful to correlate this with other data to help identify verticals of customers interested in different topics. Because education and training courses cover more hands-on material, they can be an excellent source of use case examples, as well as of occasions where decision support and guidance are needed. 

Customer surveys

Customer surveys include surveys like satisfaction surveys and sentiment analysis surveys. By reviewing the qualitative statements and the types of questions asked in the surveys, you can gain valuable insights and information like:

  • What do people think about the product?
  • What do people want more help with?
  • How do people think about the product?
  • How do people feel about the product?
  • What does the company want to know from customers? 
  • What are the company priorities?

This can also help you think about how the documentation you write has a real effect on people’s interactions with the product, and can shift sentiment one way or another.

Documentation feedback

Direct feedback on your documentation is a vital source of data if you can get it. 

  • Qualitative comments about the documentation
  • Usefulness votes (yes/no)
  • Ratings

Even if you don’t have a direct feedback mechanism on your website, you can collect documentation feedback from internal and external customers by paying attention in conversations with people and even asking them directly if they have any documentation feedback. Qualitative comments and direct feedback can be vital for making improvements to specific areas. 

Site metrics

If your documentation is on a website, you can use web access logs to gather important site metrics, such as the following:

  • Page views
  • Session data like time on page
  • Referer data
  • Link clicks
  • Button clicks
  • Bounce rate
  • Client IP

Site metrics like page views, session data, referer data, and link clicks can help you understand where people are coming to your docs from, how long they are staying on the page, how many readers there are, and where they’re going after they get to a topic. You can also use this data to understand better how people interact with your documentation. Are readers using a version switcher on your page? Are they expanding or collapsing information sections on the page to learn more? Maybe readers are using a table of contents to skip to specific parts of specific topics.  

You can split this data by IP address to understand groups of topics that specific users are clustering around, to better understand how people use the documentation.
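
As a sketch of what this can look like, here’s a minimal Python example that counts page views per topic from web access log lines and groups pages by client IP. The log lines and paths are invented for illustration, and real logs vary in format:

```python
import re
from collections import Counter

# Hypothetical access log lines in a common log format
log_lines = [
    '10.0.0.1 - - [12/Mar/2020:10:01:02 +0000] "GET /docs/install HTTP/1.1" 200 5120',
    '10.0.0.2 - - [12/Mar/2020:10:03:04 +0000] "GET /docs/search HTTP/1.1" 200 4096',
    '10.0.0.1 - - [12/Mar/2020:10:05:06 +0000] "GET /docs/install HTTP/1.1" 200 5120',
]

pattern = re.compile(r'(?P<ip>\S+) \S+ \S+ \[.*?\] "GET (?P<path>\S+)')

views = Counter()          # page views per topic path
pages_by_ip = {}           # topics each client IP clusters around
for line in log_lines:
    m = pattern.match(line)
    if m:
        views[m.group("path")] += 1
        pages_by_ip.setdefault(m.group("ip"), set()).add(m.group("path"))

print(views.most_common(1))  # [('/docs/install', 2)]
```

From there you could join the per-IP clusters with other data to see which groups of topics tend to be read together.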

Text analysis metrics

Data about the actual text on your documentation site is also useful to help you understand the complexity of your documentation.

  • Flesch-Kincaid readability score
  • Inclusivity level
  • Length of sentences and headers
  • Style linter results

You can assess the readability or usability of the documentation, or even the grade level score for the content to understand how consistent your documentation is. Identify the length of sentences and headers to see if they match best practices in the industry for writing on the web. You can even scan content against a style linter to identify inconsistencies of documentation topics against a style guide.
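
For example, here’s a rough Python sketch of a Flesch-Kincaid grade level calculation. The syllable counter is a crude vowel-group heuristic, so treat the output as approximate rather than authoritative:

```python
import re

def rough_syllables(word: str) -> int:
    # Crude heuristic: count groups of vowels; real syllable counting is harder
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(rough_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(fk_grade("Click Save. The file is stored in your workspace."), 1))
```

Running a score like this across all of your topics, and comparing the spread, says more about consistency than any single topic’s number does.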

Download metrics

If you don’t have site metrics for your documentation site because the documentation is published only as a PDF or in another medium, you can still gather metrics from that medium. 

  • Download numbers 
  • Download dates and times
  • Download categories and types

You can use these metrics to gather interest about what people want to be reading offline, or how frequently people are accessing your documentation. You can also correlate this data with product usage data and release cycles to determine how frequently people access the documentation compared with release dates, and the number of people accessing the documentation compared with the number of people using a product or service.

Topic type metrics

If you use strict topic typing at your documentation organization, you can use topic type metrics as an additional metadata layer for documentation data analysis. Even if you don’t, you can manually categorize your documentation by type to gather this data.

  • What are the topic types?
  • How many topic types are there?
  • How many topics are there of each type?

Understanding topic types can help you see how reader interaction patterns vary by type, whether your developer documentation contains predominantly different types of documentation than your user documentation does, and which types of documentation are written for which audiences.

Topic metadata

Metadata about documentation topics is also incredibly valuable as a correlation data source. You can correlate topic metadata like the following information:

  • Topic titles
  • Average topic length
  • Last updated and creation dates
  • Versions that different topics apply to

You can correlate it with site metrics to see if longer topics are viewed less frequently than shorter topics, or identify outliers in those data points. You can also manually analyze the topic titles to identify whether there are patterns (good or bad).
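
As a sketch, with entirely made-up numbers, you could compute a Pearson correlation between topic length and page views:

```python
# Hypothetical (topic length in words, weekly page views) pairs
data = [(300, 950), (800, 400), (1200, 310), (500, 700), (2000, 150)]

lengths = [l for l, _ in data]
views = [v for _, v in data]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strongly negative value would suggest longer topics are viewed less often
print(round(pearson(lengths, views), 2))
```

A correlation alone doesn’t tell you why, of course; it just points at which outliers and patterns to investigate.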

Contribution data

If you have information about who is writing documentation, and when, you can use these types of data:

  • Last updated dates
  • Authors/contributors
  • Amount of information added or removed

Contribution data can tell you how frequently specific topics were updated to add new information, and by whom, and how much information was added or removed. You can identify frequency patterns, clusters over time, as well as consistent contributors.

It’s useful to split this data by other features, or correlate it with other metrics, especially site metrics. You can then identify things like:

  • Last updated dates by topic
  • Last updated dates by product
  • Last updated dates over time

to see if there are correlations between updates and page views. Perhaps more frequently updated content is viewed more often.
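
A minimal sketch of this kind of analysis, using hypothetical commit records, might look like:

```python
from collections import defaultdict
from datetime import date

# Hypothetical contribution records: (topic, author, date updated)
commits = [
    ("install", "kim", date(2020, 1, 6)),
    ("install", "ana", date(2020, 2, 3)),
    ("search", "kim", date(2020, 1, 20)),
    ("install", "kim", date(2020, 3, 9)),
]

updates = defaultdict(list)   # update dates per topic
authors = defaultdict(set)    # distinct contributors per topic
for topic, author, when in commits:
    updates[topic].append(when)
    authors[topic].add(author)

for topic in updates:
    print(topic, len(updates[topic]), "updates,",
          len(authors[topic]), "contributors,",
          "last updated", max(updates[topic]))
```

Joining a table like this against per-topic page views is one way to check whether frequently updated topics really are viewed more often.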

Social media analytics

  • Social media referers
  • Link clicks from social media sites

If you publicize your documentation using social media, you can track interest in the documentation from those sites and see whether or not people are getting to your documentation that way. Maybe your support team is responding to people on Twitter with links to your documentation, and you want to better understand how frequently that happens and how frequently people click through those links to the documentation.

You can also identify whether or not, and how, people are sharing your documentation on social media by using data crawled or retrieved from those sites’ APIs, and looking for instances of links to your documentation. This can help you get a better sense of how people are using your documentation, how they’re talking about it, how they feel about it, and whether or not you have an organic community out there on the web sharing your documentation. 

Beyond documentation data

I hope that this detail has given you a better understanding of the different types of data, beyond documentation data, that are available to you as a technical writer to draw valuable conclusions from. By analyzing these types of data, you are better prepared to prioritize your documentation task list, and also better able to understand the customers of your product and documentation. Even if only some of these are available to you, I hope they are useful. Be sure to read Just Add Data: Using data to prioritize your documentation for the full explanation of how to use data in this way. 

The Concepts Behind the Book: How to Measure Anything

I just finished reading How to Measure Anything: Finding the Value of Intangibles in Business by Douglas Hubbard. It discusses fascinating concepts about measurement and observability, but they are tendrils that you must follow among mentions of Excel, statistical formulas, and somewhat dry consulting anecdotes. For those of you that might want to focus mainly on the concepts rather than the literal statistics and formulas behind implementing his framework, I wanted to share the concepts that resonated with me. If you want to read a more thorough summary, I recommend the summary on Less Wrong, also titled How to Measure Anything.

The premise of the book is that people undertake many business decisions and large projects with the idea that the success of those decisions or projects can’t be measured, and thus they aren’t measured. It seems a large waste of money and effort if you can’t measure the success of such projects and decisions, so he developed a consulting business and a framework, Applied Information Economics (AIE), to prove that you can measure such things.

Near the end of his book on page 267, he summarizes his philosophy as six main points:

1. If it’s really that important, it’s something you can define. If it’s something you think exists at all, then it’s something that you’ve already observed somehow.

2. If it’s something important and something uncertain, then you have a cost of being wrong and a chance of being wrong.

3. You can quantify your current uncertainty with calibrated estimates.

4. You can compute the value of additional information by knowing the “threshold” of the measurement where it begins to make a difference compared to your existing uncertainty.

5. Once you know what it’s worth to measure something, you can put the measurement effort in context and decide on the effort it should take.

6. Knowing just a few methods for random sampling, controlled experiments, or even just improving on the judgment of experts can lead to a significant reduction in uncertainty.

To restate those points:

  1. Define what you want to know. Consider ways that you or others have measured similar problems. What you want to know might be easier to see than you thought.
  2. It’s valuable to measure things that you aren’t certain about if they are important to be certain about.
  3. Make estimates about what you think will happen, and calibrate those estimates to understand just how uncertain you are about outcomes.
  4. Determine a level of certainty that will help you feel more confident about a decision. Additionally, determine how much information will be needed to get you there.
  5. Determine how much effort it might take to gather that information.
  6. Understand that it probably takes less effort than you think to reduce uncertainty.

The crux of the book revolves around restating measurement from “answer a specific question” to “reduce uncertainty based on what you know today”.

Measure to reduce uncertainty

Before reading this book, I thought about data analysis as a way to find an answer to a question. I’d go in with a question, I’d find data, and thanks to that data, I’d magically know the answer. However, that approach only works with specifically-defined questions and perfect data. If I want to know “how many views did a specific documentation topic get last week” I can answer that straightforwardly with website metrics.

However, if I want to know “Was the guidance about how to perform a task more useful after I rewrote it?” there was really no way to know the answer to that question. Or so I thought.

Hubbard’s book makes the crucial distinction that data doesn’t need to exist to directly answer that question. It merely needs to make you more certain of the likely answer. You can make a guess about whether or not it was useful, carefully calibrating your guess based on your knowledge of similar scenarios, and then perform data analysis or measurement to improve the accuracy of your guess. If you’re not very certain of the answer, it doesn’t take much data or measurement to make you more certain, and thus increase your confidence in an outcome. However, the more certain you are, the more measurement you need to perform to increase your certainty.

Start by decomposing the problem

If you think what you want to measure isn’t measurable, Hubbard encourages you to think again, and decompose the problem. To use my example, and #1 on his list, I want to measure whether or not a documentation topic was more useful after I rewrote it. As he points out with his first point, the problem is likely more observable than I might think at first.

“Decompose the measurement so that it can be estimated from other measurements. Some of these elements may be easier to measure and sometimes the decomposition itself will have reduced uncertainty.”

I can decompose the question that I’m trying to answer, and consider how I might measure usefulness of a topic. Maybe something is more useful if it is viewed more often, or if people are sharing the link to the topic more frequently, or if there are qualitative comments in surveys or forums that refer to it. I can think about how I might tell someone that a topic is useful, what factors of the topic and information about it I might point to. Does it come up first when you search for a specific customer question? Maybe then search rankings for relevant keywords are an observable metric that could help me measure utility of a topic.

You can also perform extra research to think of ways to measure something.

“Consider your findings from secondary research: Look at how others measured similar issues. Even if their specific findings don’t relate to your measurement problem, is there anything you can salvage from the methods they used?”

Is it business critical to measure this?

Before I invest a lot of time and energy performing measurements, I want to make sure (to Hubbard’s second point in his list) that the question I am attempting to answer, what I am trying to measure, is important enough to merit measurement. This is also tied to points four, five, and six: does the importance of the knowledge outweigh the difficulty of the measurement? It often does, especially because (to his sixth point), the measurement is often easier to obtain than it might seem at first.

Estimate what you think you’ll measure

To Hubbard’s third point, a calibrated estimate is important when you do a measurement. I need to be able to estimate what “success” might look like, and what reasonable bounds of success I might expect.

Make estimates about what you think will happen, and calibrate those estimates to understand just how uncertain you are about outcomes.

To continue with my question about a rewritten topic’s usefulness, let’s say that I’ve determined that increased page views, elevated search rankings, and link shares on social media will mean the project is a success. I’d then want to estimate what number for each of those measurements might be meaningful.

To use page views as an example for estimation, if page views increase by 1%, it might not be meaningful. But maybe 5% is a meaningful increase? I can use that as a lower bound for my estimate. I can also think about a likely upper bound. A 1000% increase would be unreasonable, but maybe I could hope that page views would double, and I’d see a 100% increase in page views! I can use that as an upper bound. By considering and dismissing the 1% and 1000% numbers, I’m also doing some calibration of my estimates—essentially gut-checking them with my expertise and existing knowledge. The summary of How to Measure Anything that I linked in the first paragraph addresses calibration of estimates in more detail, as does the book itself!

After I’ve settled on a range of measurement outcomes, I can assess how confident I am that this might happen. Hubbard calls this a Confidence Interval. I might only be 60% certain that page views will increase by at least 5% but they won’t increase more than 100%. This gives me a lot of uncertainty to reduce when I start measuring page views.

One way to start reducing my uncertainty about these percentage increases might be to look at the past page views of this topic, to try to understand what regular fluctuation in page views might be over time. I can look at the past 3 months, week by week, and might discover that 5% is too low to be meaningful, and a more reasonable signifier of success would be a 10% or higher increase in page views.

Estimating gives me a number that I am attempting to reduce uncertainty about, and performing that initial historical measurement can already help me reduce some uncertainty. Now I can be 100% certain that a successful change to the topic should show more than a 5% increase in page views on a week-to-week basis, and maybe 80% certain that a successful change would show a 10% or greater increase.
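
With made-up weekly numbers, that historical baseline might look like the following sketch, where the normal week-over-week fluctuation tells me what a meaningful increase has to exceed:

```python
# Hypothetical weekly page views for a topic over the past 12 weeks
weekly_views = [410, 395, 430, 420, 405, 415, 440, 425, 410, 435, 420, 430]

# Week-over-week percent changes show the topic's normal fluctuation
changes = [
    (b - a) / a * 100 for a, b in zip(weekly_views, weekly_views[1:])
]

print(f"normal fluctuation: {min(changes):.1f}% to {max(changes):.1f}%")
# normal fluctuation: -3.7% to 8.9%
```

In this invented data, an 8.9% jump happens in the normal course of things, so a 5% increase after a rewrite wouldn’t prove anything; a success threshold needs to sit above that noise.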

When doing this, keep in mind another point of Hubbard’s:

“a persistent misconception is that unless a measurement meets an arbitrary standard….it has no value….what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong.”

If you’re choosing to undertake a large-scale project that will cost quite a bit if you get it wrong, you likely want to know in advance how to measure the success of that project. This point also underscores his continued emphasis on reducing uncertainty.

For my (admittedly mild) example, it isn’t valuable for me to declare that I can’t learn anything from page view data unless 3 months have passed. I can likely reduce uncertainty enough with two weeks of data to learn something valuable, especially if my level of certainty is relatively low (in this example, in the 40-70% range).

Measure just enough, not a lot

Hubbard talks about the notion of a Rule of Five:

There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

Knowing the median value of a population can go a long way in reducing uncertainty. Even if you can only get a seemingly-tiny sample of data, this rule of five makes it clear that even that small sample can be incredibly valuable for reducing uncertainty about a likely value. You don’t have to know all of something to know something important about it.
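
You can check this claim yourself with a quick simulation. This sketch draws repeated samples of five from a population and measures how often the population median lands between the sample’s minimum and maximum:

```python
import random

random.seed(2)

# Population of 10,000 values; its median is 5000.5
population = list(range(1, 10001))
median = 5000.5

# A sample misses the median only if all five values fall on the same
# side of it, which happens with probability 2 * (1/2)**5 = 0.0625
hits = 0
trials = 100_000
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) < median < max(sample):
        hits += 1

print(hits / trials)  # close to the predicted 1 - 2 * 0.5**5 = 0.9375
```

The observed rate converges on 93.75%, matching Hubbard’s Rule of Five.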

Do something with what you’ve learned

After you perform measurements or do some data analysis and reduce your uncertainty, then it’s time to do something with what you’ve learned. Given my example, maybe my rewrite increased page views of the topic by 20%, something I’m now fairly certain is a significant degree, and it is now higher in the search results. I’ve now sufficiently reduced my uncertainty about whether or not the changes made this topic more useful, and I can now rewrite similar topics to use a similar content pattern with confidence. Or at least, more confidence than I had before.

Overall summary

My super abbreviated summary of the book would then be to do the following:

  1. Start by decomposing the problem
  2. Ask is it business critical to measure this?
  3. Estimate what you think you’ll measure
  4. Measure just enough, not a lot
  5. Do something with what you’ve learned

I recommend the book (with judicious skimming), especially if you need some conceptual discussion to help you unravel how best to measure a specific problem. As I read the book, I took numerous notes about how I might be able to measure something like support case deflection with documentation, or how to prioritize new features for product development (or documentation). I also considered how customers might better be able to identify valuable data sources for measuring security posture or other events in their data if they followed many of the practices outlined in this book.

So you want to be a technical writer

If you’re interested in becoming a technical writer, or are new to the field and want to deepen your skills and awareness of the field, this blog post is for you.

What do technical writers actually do?

Technical writers can do a lot of different things! People in technical writing write how-to documentation, craft API reference documentation, create tutorials, even provide user-facing text strings to engineers.

Ultimately, technical writers:

  • Research to learn more about what they are documenting.
  • Perform testing to verify that their documentation is accurate and validate assumptions about the product.
  • Write words that help readers achieve specific learning objectives and that capture what the writer has learned in the research and testing processes.
  • Initiate reviews with engineers, product managers, user experience designers, quality assurance testers, and others to validate the accuracy, relevancy, and utility of the content.
  • Advocate for the customer or whoever uses the product or service being documented.

The people reading what technical writers have produced could be using software they’ve purchased from your company, evaluating a product or service they are considering purchasing, undergoing a required process controlled by your organization, writing code that interfaces with your services, configuring, installing, or modifying hardware produced by your company, or even reviewing the documentation for compliance and certification purposes. Your goal, if you choose to accept it, is to help them get the information they need and get back to work as soon as possible.

Identify what you want from your career

Some general career-assessment tips:

  • Identify what motivates you and what challenges you.
  • Identify what type of team environment you want. These are loose descriptions of types of team environments that are out there:
    • A large, highly collaborative team with lots of interaction.
    • A distributed team that is available for questions and brainstorming as needed, but where everyone is largely working on their own thing.
    • A small team that collaborates as needed.
    • A team of one: it’s just you, you are the team.

Is technical writing a good fit for you?

  • Do you enjoy explaining things to other people?
  • Do people frequently ask you to help explain something to them?
  • Do people frequently ask you to help them revise their content?
  • Do you care or enjoy thinking about how to communicate information?
  • Do you identify when things are inconsistent or unclear and ask people to fix it? (Such as in a UI implementation, or when reviewing a pull request)
  • Do you enjoy problem-solving and communication?
  • Do you like synthesizing information from disparate sources, from people to product to code to internal documentation?
  • Do you enjoy writing?

My background and introduction to technical writing

I started in technical support. In college I worked in desktop support for the university, wandering around campus or in the IT shop, repairing printers, recovering data from dying hard drives, running virus scans, and updating software. After graduation I eventually found a temp job working phone support at the University of Michigan, managing to turn that position into a full-time permanent role and taking on two different queues of calls and emails. However, after a year I realized that it was super exhausting to me. I couldn’t handle being “on” all day, and I found myself enjoying writing the knowledge base articles that would record solutions for common customer calls. I had written fifty of them by the time I discovered a posting for an associate-level documentation specialist.

I managed to get that position, and transferred over to work with a fantastic mentor who taught me a ton about writing and communicating. After a few years in that position, writing everything from communication plans (and the accompanying communications) to technical documentation to a couple of video scripts, I chose to move to California. With that came another round of job hunting, and the realization that technical writing can fall under a lot of different job titles: UI writer, UI copywriter, technical writer, documentation specialist, information developer… I set up job alerts, and ended up applying, interviewing, and accepting an offer for a technical writing position at Splunk. I’ve been at Splunk for several years now, and recently returned to the documentation team after spending nearly a year working in product management.

Where people commonly come to technical writing from

Technical writers can get their start anywhere! Some people become technical writers right out of college, but others transition to it after their career has already begun.

As a technical writer, your college degree doesn’t need to be in technical writing, or even in a technical or writing-specific field. I studied international studies, and I’ve worked with colleagues who studied astronomy, music, or statistics. Others have computer science or technical communication degrees, but it’s not a requirement.

For people transitioning from other careers, here are some common starting careers:

  • Software developers
  • UX practitioners
  • Technical support

That’s obviously a short list, but again, if you care about the user and about communication in your current role, that background will help you immensely in a technical writing position.

Prepare for a technical writing interview

Prepare a portfolio of writing samples

Every hiring manager wants to see a collection of writing samples that demonstrate how you write. If you don’t work in technical writing yet, you might not have any. Instead, you can use:

  • Contributions you’ve made to open source project documentation. For example, commits to update a README: https://github.com/yahoo/gryffin/pull/1
  • How-to processes you’ve written. For example, instructions for performing a code review or a design review.
  • A blog post about a technical topic that you are familiar with. For example, a post about a newly-discovered functionality in CSS.
  • Basic task documentation about software that you use. For example, write up a sample task for how to create a greeting card in Hallmark Card Studio.

Your portfolio of writing samples demonstrates to hiring managers not only that you have writing skills, but also that you consider how you organize content, how you write for a specific audience, and what level of detail to include for that audience. The samples don’t have to be hosted on a personal website and branded accordingly. The important thing is to have something to show to hiring managers.

Depending on the interviewer, you might perform a writing exercise in-person or as part of the screening process. If you don’t have examples of writing like this, that’s a good reason to track down some open source projects in need of some documentation assistance!

Learn about the organization and documentation

Going into the interview, make sure you are familiar with the organization and its documentation.

  • Read up about the organization or company that you are interviewing with. If you can, track down a mission statement for the organization.
  • Find the different types of documentation available online, if possible, and read through it to get a feel for what the team might be publishing.
  • If the organization provides a service or product that you’re able to start using right away, do that!

All of these steps help you better understand how the organization works and what the team you might join is producing, and they demonstrate to the interviewer that you are motivated to understand what the role and the organization are about. Not to mention, this makes it clear that you have some of the information-gathering skills a technical writer needs.

Questions you might want to ask

Find out some basic team characteristics:

  • How many other technical writers are at the organization?
  • What org are the technical writers part of?
  • Is there a central documentation team or are the writers scattered across the organization?
  • How distributed is the documentation team and/or the employees at the organization?

Learn about the documentation process and structure:

  • What does the information-development process look like for the documentation? Does it follow semi-Agile methods and get written and researched as part of the development team, or does information creation follow a more waterfall style, where writers are delivered a finished product and expected to document it? Or is it something else entirely?
  • Are there editors or a style guide?
  • Do the writers work directly with the teams developing the product or service?
  • What sort of content management system (CMS) is in use? Is it structured authoring? A static-site generator reliant on documentation files written in markdown stored next to the code? A wiki? Something else?

Find out how valuable documentation is to the organization:

  • Do engineers consider documentation vital to the success of the product or service?
  • Do product managers?
  • Do you get customer feedback about your documentation?
  • What is the goal of documentation for the organization?

Some resources for getting started with technical writing

Books to read

These books cover technical writing principles, as well as user design principles. None of these links are affiliate links, and the proceeds of the book I helped author go to charity.

  • The Product is Docs by Christopher Gales and the Splunk documentation team
    • Yes, I helped.
  • Every Page is Page One by Mark Baker
    • This book is a great introduction and framework for writing documentation for the web.
  • Developing Quality Technical Information by Michelle Carey, Moira McFadden Lanyi, Deirdre Longo, Eric Radzinski, Shannon Rouiller, and Elizabeth Wilde
    • This book is a great resource and reference for detailed writing guidance, as well as information architecture.
  • The Design of Everyday Things by Don Norman
    • The classic design book covers user-focused principles that are crucial to writing good documentation.

This is an intentionally short list featuring books I’ve found especially useful. You can also consider reading Scenario-Focused Engineering: A toolbox for innovation and customer-centricity, Nicely Said: Writing for the Web with Style and Purpose, Content Everywhere: Strategy and Structure for Future-Ready Content, Design for How People Learn, and Made to Stick: Why Some Ideas Survive and Others Die.

Articles and blogs about technical writing

I like following resources in RSS feeds to get introduced to good thinking about technical writing, but not all good content is new content! Some great articles that have helped me a lot:

Blogs to follow (intermittently updated)

Great articles about technical writing

Other web resources

Twitter is a great resource for building a network of people who care about documentation. If you use it, I recommend searching for people who commonly tweet with #writethedocs.

Write the Docs is a conference and community founded by Eric Holscher and maintained by a brilliant set of volunteers!

The Write the Docs Slack workspace is fairly active, and includes channels for job postings, career advice, and current discussions about trends and challenges in the technical writing world.

Some talks from the conference I recommend checking out are visible on YouTube:

There are also playlists on YouTube for 2018 (which I did not attend) and earlier years, so dig around there for more resources if watching videos is useful to you!

#tweetthedocs: Use Twitter to meet your users where they are

As a tech writer, it’s hard to tell how users get to your docs at all. They might be clicking on in-product help links, searching the web, or getting sent links from support. But you can get proactive about it too. Help users of your product get their questions answered by meeting them where they are—on social media sites like Twitter. You may already rely on marketing, sales, support, and search engines to bring users to your documentation, but social media is a direct option. You can tweet about anything from general topics that answer common user questions to drier topics that are important for people to know. Read on to learn how!

Continue reading