#0036: Introduction of a website changelog

Preamble

I initially launched this website in 2020-02, with nothing but the default WordPress “coming soon” page to show for itself. Then after some months (2020-06), I finally got round to actually publishing content onto it. And ever since then, I have been in a continual (albeit sporadic and intermittent) process of revision and iterative improvement.

So after about a year and a half of having this website online and functioning as intended, I now see the potential need for a master changelog here: a public-facing log that will record all notable changes made to the website.

Utility of a public facing website changelog

Although this changelog will record all noteworthy changes made to the website as a whole, I see its utility chiefly in noting changes made to the content of this blog’s articles themselves, rather than to the website at large.

What I mean by this is that it is more important to note post-publication changes made to the content of articles than it is to note changes in the wider website as a whole. This is because most changes made outside of the articles hosted here will likely be concerned with aesthetics, such as the addition or removal of a decorative graphic.

Simply put: they are less important changes in terms of affecting the value proposition of the website itself. I do intend to note these types of things as well, at least for the most part, although very minor website changes of this kind are unlikely to be noted.

That being said, I should restate the main function of this changelog: to note changes made to article content. This is because this blog’s articles (or blog posts) are where its primary (and most substantive) value lies as a website. I.e. the primary reason a person may visit this site is to read the articles. This logging will allow readers to avoid confusion when/if they revisit an article that they have already read, only to find that the content has changed to some degree in the interim.

My M.O. regarding editing published articles

Once an article has completed the development process and is finally published, I have a habit of coming back to tweak and change things after the fact. This tends to happen some time after the article is published, when I have had sufficient time to rest and cool off on the topic. At that point I am usually fresher minded, and hence more apt to find better ways to get my point across, as well as to spot any residual errors previously missed.

I am of the mind that I should also chronicle these changes as I make them. This is in order to avoid a sense of revisionist history, one caused by the absolute erasure of any mistakes such as erroneous calculations, half-witted conclusions, or simple misinformation. I admit I am prone to getting it wrong a lot of the time, especially when it comes to speculations made from limited observations, or ones unfortunately coloured by personal biases.

With that in mind, I should take a moment to state clearly the nature of this website, in order to eliminate any misunderstandings or confusion as to the nature of this publication. As the name should suggest, this website is literally “a tinkerer’s blog”. The articles held therein are presented not as an authoritative source of information, but rather as my (and only my) best understanding of any particular subject at the time. Complete with grammatical mistakes, spelling errors, personal experience, and biases; as well as good ol’ fashioned human ignorance and incompetence.

Although I (think I) do my due diligence in researching articles, as well as re-reading my work several times over before publishing in order to (give myself the opportunity to) catch any and all errors that I can; unfortunately, by that point my mind has often become exhausted with the subject matter, and would rather move on to something else. Anything else! (Maybe even a refreshing punch to the testicles.) Add to that time pressures such as work and scheduled commitments, and it all adds up, pushing me to hit the publish button perhaps earlier than I otherwise should.

Hence, in an article’s final proofreading and finishing-edits stage, I tend to find myself skimming sentences, or simply unconsciously correcting the text grammatically and/or semantically in my head. I.e. I knew what I meant by what I wrote, but I left the text in a state where its messaging is ambiguous, nonsensical, or open to multiple unintended interpretations. Often I miss mistakes because of this, and only really find them after I have had some time to ‘cool off’ on the subject, as it were.

So that’s what normally happens with any given article. Post-publish edits and refinements seem like standard protocol for me. I even have a small to-do list of edits I need to make to past articles.

For example: my review of the video game “Princess Remedy: In a world of hurt” has no critique of the game’s soundtrack. I somehow just completely forgot to mention it at all; I blotted the concept of it out of my mind at the time of writing. So I intend to go back and insert this at a later date.

The thing is: I don’t like the idea of this additional content suddenly appearing within that article one day, apropos of nothing, masquerading as though it has always been there from the beginning. I’ll leave that revisionist M.O. to the articles on political/activist news websites.

Hence, I need some way to communicate to the audience that it is an add-on edit. In the past I solely used what I call an “update tag”: I’d insert a set of square brackets featuring the date before the add-on segment. Basically this: “[UPDATE: 2022-0X-XX] The music is …”. In the future I think I will use the changelog alone to note smaller changes, and both the changelog and an in-article update tag for larger updates, such as an entire additional segment to an article.

RSS

Just as an aside: if for whatever reason you want the raw, undoctored initial publications (free of my post-publish meddling, that is), then please subscribe to my RSS feed. It sends you the articles as they are published, and doesn’t update the content after that initial data transfer.

To do this, copy the below link into your RSS aggregator of choice:
https://www.tinkerersblog.net/rss
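
For the more programmatically inclined, the feed can also be read with a few lines of code. Below is a minimal sketch in Python using the third-party feedparser library (an assumption on my part; any RSS library would do) that lists the latest entries:

    # Minimal sketch: fetch and list entries from this site's RSS feed.
    # Assumes the third-party "feedparser" package is installed (pip install feedparser).
    import feedparser

    FEED_URL = "https://www.tinkerersblog.net/rss"

    feed = feedparser.parse(FEED_URL)      # download and parse the feed
    for entry in feed.entries:             # each entry is one published article
        print(entry.title)
        print("   ", entry.link)
        print("    published:", entry.get("published", "unknown"))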

Closing thoughts

Although, as stated, the main reason for a changelog is logging post-publication article edits, it will also be good for keeping track of more general activity around the website: things such as when new manual scans are added, or which pages have been recently edited. It’ll give the readership insight into where my attention regarding the website has been lately, allowing them a sense of the frequency and general trajectory of my activities here. That would be useful to anyone interested. (If there really is anyone interested … that is.)

I think it’ll add real utilitarian value to the website. But we’ll see exactly how much once it is actually implemented and has had some time to operate. Theoretically there is no reason why a website shouldn’t have a changelog; I mean, it is a software product with ongoing development just like any other. However, I do wonder why so few other websites actually have a public-facing changelog.

It could be something as simple as a public changelog not truly being a necessity. Or it could be that it would bring a level of perhaps unwanted transparency to their website. I mean, it’s hard to simply vanish things if you have a policy of documenting changes. I guess you could just not document the vanishing of the undesirable content, but still document the more mundane changes made, although that does undermine the utility of the tool.

If I were pushed to give an answer, I’d say that most people just don’t want the work of it. For example: for-profit websites tend to streamline their overheads (i.e. cut costs wherever they can), and implementing and routinely updating a changelog requires continual communication and co-ordination between multiple levels of staff. They most likely wouldn’t want to bother with one, especially since there is little in the way of returns in terms of profit for the work necessary.

Even single-owner general hobbyist websites probably wouldn’t bother with one either, as the single operator likely focuses their efforts on documenting their actual hobby activities rather than developing the website itself. I’d imagine this is especially true in cases where the subject of their hobby or activity is unrelated to technology.

So, unlike with this website, there’d be no on-topic value in discussing website development as a subject. Think of a website documenting a homestead, a hobby farm, painting miniatures, religious education, or bodybuilding, to name a few. Basically any website where discussing the website itself is unrelated to the core subjects of the website… Website.

That’s all really. Changelog incoming. (Actually it’s already here; this article is a month late. :D)
Thank you for reading.

Term Glossary


RSS – Really Simple Syndication
M.O. – Modus Operandi (mode of operation)

Links, references, and further reading


https://en.wikipedia.org/wiki/Rss


https://en.wikipedia.org/wiki/Changelog

#0016: Software recommendation: Firefox Monitor and haveibeenpwned?

https://monitor.firefox.com/

https://haveibeenpwned.com/

Preamble

In a bid to make more immediately useful content, I’d like to start recommending some of the various tools that I use. In this case it is an online service: namely Mozilla’s Firefox Monitor; or more to the point, the website haveibeenpwned.com (HIBP), which Firefox Monitor uses to enable its service.

What do they do?

In essence, Firefox Monitor and HIBP are used to check whether or not an email address is associated with a recorded data breach. Keyword: “recorded”. Firefox Monitor does this by querying the database of known breaches provided by haveibeenpwned.com.
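
For the curious: HIBP also exposes this lookup as a web API. The sketch below is purely illustrative, written in Python with the third-party requests library; the endpoint and header names are from HIBP’s public API documentation as I recall them (the v3 API requires a paid API key), so verify against haveibeenpwned.com before relying on it.

    # Illustrative sketch: ask HIBP which known breaches an email address appears in.
    # Assumes a valid HIBP API key; the "hibp-api-key" and "user-agent" headers are
    # required by the v3 API as far as I know.
    import requests

    API_KEY = "YOUR_HIBP_API_KEY"      # placeholder, not a real key
    EMAIL = "someone@example.com"      # the address to check

    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
        headers={"hibp-api-key": API_KEY, "user-agent": "tinkerersblog-example"},
        params={"truncateResponse": "false"},   # ask for full breach details
        timeout=10,
    )

    if resp.status_code == 404:
        print("No known breaches for this address.")
    else:
        resp.raise_for_status()
        for breach in resp.json():              # one entry per breach
            print(breach["Name"], "-", breach.get("BreachDate", "date unknown"))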

The purpose of this service is to allow people to ascertain whether or not an online account (and the user information therein) associated with the email address has been compromised in a known data breach, and is thus in need of immediate remedy. Things like changing passwords and recovery phrases, and generally being aware that any potentially sensitive information associated with that account, such as full name, mother’s maiden name, GPS location, education, birth date, telephone, city, school, or business information, has now circulated within the hacker community.

Additionally, it helps to know which company is to blame for the spike in the volume of spam and phishing emails that will most certainly accompany said breach. I don’t know about you, but that’s something I’d certainly like to know.

Why is this service important?

It is my belief that every solution begins with awareness, the awareness of the problem. Only then can we move to better the situation. This tool gives you exactly that.

The main reason I think this tool is important is that the companies involved in the data breaches themselves are loath to make their customers aware of them. Even though it is in their users’ best interests, it is not in the businesses’ best interests to advertise any breaches beyond the legally mandated/enforced minimum. Furthermore, who knows what that minimum actually even is when dealing with global or multinational companies that operate across many legal jurisdictions? This is especially true when dealing with larger companies with entire legal teams at their disposal.

This service is important because (still just my opinion) companies in general tend to quietly patch any security vulnerabilities as they find them, and move on hoping no-one has noticed. This is especially true when there is no internally confirmed security breach.

Whenever a confirmed breach does happen, the first thing the company responsible does is downplay its scope and severity. This may (and probably does) include not even publicly reporting the breach until it has already been made public elsewhere, often at a much later time. In many cases there is even resistance to acknowledging fault after the breach is made public. This is most likely a bid to exonerate themselves of any potential legal liabilities involved.

At the very least, acknowledgement of fault could be seen as weakness; weakness that will shake public confidence in the company and/or service. Therefore it is in their best interest to maintain the general illusion of control and/or competence. It’s corporate PR 101. It’s just a shame that the company’s and its users’ interests don’t align in this circumstance.

Why should people use these tools?

Both Mozilla’s Firefox Monitor and HIBP are free-to-use, publicly available tools, and both come from reasonably trusted sources. Firefox Monitor is the product of an open-source, community-driven effort, giving it a certain level of transparency. And HIBP was developed by Troy Hunt, an authority on the topic of digital security. Even if you don’t know who Mr Hunt is (and I didn’t prior to this post), the fact that the Mozilla team decided to use his HIBP database for Firefox Monitor means that they are vouching for it.

More importantly, the tools themselves can assist an individual with protecting their personal information online. They do this by giving the individual that exact thing I mentioned earlier: awareness. Awareness of whether or not that person’s email-associated account information has been circulated, and which company is at fault for it.

For example: if you used the tool and because of it now know that an account associated with your email at company X has been breached, and that along with that breach your “security questions” were revealed, then you now know both to remove those particular security questions and never to use them with any future account … ever. They are basically permanently compromised. Forewarned is forearmed.

[Image: taken from https://github.com/mozilla/blurts-server]

Difference between Firefox Monitor and haveibeenpwned?

Firefox Monitor is a very slimmed-down version of the HIBP tool that gives the lay user just what they need, without overwhelming or putting off said lay user. It is rather idiot-proof, merely requiring users to input their email address and press enter. That’s it. Firefox Monitor also comes bundled with a few basic articles on good security practice that may be helpful to the average user. A lot of it is common-sense stuff, but you know what they say about common sense.

Although Firefox Monitor is the simpler tool to use, it must be said that HIBP is a far more robust tool, and the one that I recommend. This is because in addition to searching email addresses, it allows searching by password and by domain name. The website also allows users to browse a catalogue of breached websites without running a search. Extracts below.

Ever wondered how many accounts have been breached because they used the password “love”? Wonder no more. According to HIBP, it’s 356,006 times.
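
Incidentally, the password search doesn’t send your password anywhere in full. As I understand it, HIBP’s Pwned Passwords service uses a k-anonymity scheme: you send only the first five characters of the password’s SHA-1 hash, receive every matching hash suffix along with its count, and do the final comparison locally. A rough Python sketch of that lookup (details from the public API as I recall them):

    # Rough sketch of HIBP's Pwned Passwords k-anonymity lookup.
    # Only the first 5 characters of the SHA-1 hash ever leave your machine.
    import hashlib
    import requests

    password = "love"
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()

    count = 0
    for line in resp.text.splitlines():          # each line: "<hash suffix>:<count>"
        candidate, _, seen = line.partition(":")
        if candidate == suffix:
            count = int(seen)
            break

    print(f"Password seen {count} times in known breaches." if count else "Not found.")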

I have also picked out a nice little selection of companies from HIBP’s catalogue of known breaches that you may find interesting.

Personal experience with a data breach.

Just an aside if anyone is interested. From reading the above “Why is this service important?” section, you might have gotten the idea that I may be ever so slightly cynical about the companies involved in security breaches like these.

Frankly speaking, whenever data breaches do happen, I do not consider the corporations involved to be “victims” of cybercrime, as many others seem to do. It is a nauseating sentiment, one that condones bad behaviour. This is because it is my personal belief that the vast majority of these cases come down to one core thing: a dereliction of duty. They failed in their duty to protect the data that they collected. Little more.

In addition to consuming the various news articles about data breaches over the years (ones with the general theme of corporate incompetence, like employees carrying around sensitive data on unencrypted thumb-drives, only to lose them on the train), I also have a few examples of companies that leaked my very own personal information. All of this has coloured my opinions thus.

The most memorable is the online virtual tabletop gaming website roll20.net. The thing that rubbed me the wrong way about them is that at no point during the process did they ever take any accountability for allowing it to happen. They did eventually outline what information was taken, but they never offered an apology for their lapse in security. Instead they covered it up with boilerplate (legally friendly) corporate speak.

Example: “The investigation identified several possible vectors of attack that have since been remedied. Best practices at Roll20 for communications and credential cycling have been updated, with several code library updates completed and more in development.” Assuming that is indeed true, the same could literally be said by any company involved in a similar data breach – just change the names.

Although, from what I understand by reading the article that they linked in their post, technically (purely technically) it appears as though it’s not their fault, but rather that of the underlying technology that they used. At least that is the implication presented. I’d argue that they still made the decision to use said tech, and thus vouched for it by doing so. That makes them responsible, at least tangentially; at least enough for a simple sorry. The closest their customers got to an apology was a “Frankly, this sucks.”, written in an official company blog post that passed for a conclusive public report, authored by Jeffrey Lamb, the Data Protection Officer.

I remember thinking at the time that whoever was writing this was good at the bland formalities of corporate speak, but otherwise is (and excuse my French): a fucking dickhead. You have to keep in mind, reader, that they only knew of their own data breach because of a third-party report, one that was published months after the fact. The report was published in February of 2019, and the breach happened (according to Mr Lamb) sometime in late 2018. No apology warranted, not even for missing the hack until a third party told you about it months after the fact. They then went on to write their conclusive report in August of 2019. So nearly a year passed between the data breach and the final public debrief, where they outlined exactly what data was exposed. I call that incompetence. “Data Protection Officer”? More like resident salary sucker.

The ultimate lack of accountability is what really rubbed me up the wrong way here. And why would they be accountable? There is, it seems, little in the way of consequences for these messes. There are even examples of customers defending roll20 in the comments, referring to them as “victims” of cybercrime. They aren’t the victims here, idiot; you are! I’ll include some choice examples of this for your entertainment. It’s customers like that who make businesses feel like they don’t have to be accountable for their actions, or in this case their general inaction with regards to proactively protecting customer data. Please read through the example comment thread.

You really can’t reason with people like that. They have too much emotional stock in a corporation to admit to themselves that they got screwed by it. There were even people actually praising roll20 for its meagre efforts: a sum total of two blog posts, some notice tweets/emails, and patching a hole in their own boat. Thanks roll20, stellar job. Shame about all my cargo sinking to the seafloor for the bottom feeders to enjoy. I mean, you only lost my full name, my IP address (so my physical location), my password, oh, and some of my credit card data. Don’t worry about that, roll20 (not like you would), that’s my problem. Fuck those types of customers. Wankers.

Moving on. Another example of a gormless entity losing my data is ffshrine.org, a Final Fantasy fan site that I registered with in 2010, I believe, and haven’t used since. Ideally, they would have flagged the account as inactive and deleted it after a couple of years. But alas, instead they just kept whatever details I gave them for the five years until their 2015 data breach, in which they lost subscriber passwords and email addresses. No warning email post event, nothing. Radio silence. I had a similar experience with tumblr back in the day. Radio silence. No accountability. Are you sensing a theme here, dear reader?

Closing thoughts.

I have written far more here than I initially wanted to, so I will keep this summary short. Tools like haveibeenpwned and Firefox Monitor are things that you as an individual can use to help protect yourself in cyberspace. They can help you take proactive measures to safeguard your own data. They can also show you evidence that the large corporations really aren’t as professional or as infallible as they like to appear.

And that when they make mistakes, mistakes such as losing your data, it is often you who has to bear the brunt of the repercussions, with little if any consequence to them. Maybe they incur a temporary stock dip. But the fact of the matter is, they’ll recover from it. However, whatever data you provided them for safekeeping, well, that’s now permanently out there. Enjoy.

For example: to this day I still get phishing emails that say something like: “hey MY_FULL_NAME, YOUR_BANK has detected multiple login attempts using PASSWORD_FROM_FFSHRINE.ORG to login. We have frozen your account because we suspect fraudulent activity. Follow the obviously dodgy link provided and give us your security questions to fix this.” Although I can recognise a phishing scam when I see one, many technology-illiterate users cannot.

And make no mistake, the companies that were lax in their security, the ones with the attitude that breaches just happen, are the exact ones to blame for the perpetuation of the black-market information economy. An economy that preys on people, the real victims: the people who trusted these corporations with their data, thinking it in safe hands. Not the corporations themselves, whose lack of diligence and general incompetence allowed the data they were entrusted with to be exposed.

Jeez… that got a bit preachy towards the end. Didn’t it? Sorry about that. It’s just that seeing companies fob off their responsibilities, and then seeing customers with Stockholm syndrome defending those same companies against criticism, really ruffles my feathers.

Anyway, thanks for reading.

References, links, further reading.

https://github.com/mozilla/blurts-server

https://monitor.firefox.com/

https://monitor.firefox.com/breaches

https://monitor.firefox.com/security-tips

https://haveibeenpwned.com/

https://haveibeenpwned.com/About

https://feeds.feedburner.com/HaveIBeenPwnedLatestBreaches

https://blog.roll20.net/post/182811484420/roll20-security-breach

https://blog.roll20.net/post/186963124325/conclusion-of-2018-data-breach-investigation

Hacker who stole 620 million records strikes again, stealing 127 million more

#0001: On creating a website

image depicting "w w w ."

I find myself sitting here at a loss as to what topic I should go with for my first article on this site. It needs to be something interesting, and more importantly this first post will set the standard for those to come; so it needs to be good.

If you read the title, you can probably guess what topic I chose. Yeah, after spinning it in my head for a while, I decided just to go with how this site came into existence. This is, after all, supposed to be a technical blog (of sorts), so it seems fitting that we start with the basic technology of this website itself.

So what is it that you actually need to create a website? Well, like most questions in life, the answer is: it depends. In this case I needed three things to get up and running: 1) a domain name, an official registered name for my website; 2) a site-builder, software to help me make the thing; and finally 3) a host, some always-online servers to hold the code and contents of the site; these are the computer(s) that users will connect to when they visit the website.

[Image: logos of HTML5, JS, and CSS3]

Initially I thought that creating a website would be no sweat. Just get a domain name, get a host, and hash out something in HTML, JavaScript, and CSS (the holy trinity!). No builders necessary. No worries. It should take me exactly “one weekend” to do this. Right?

Well, unfortunately, no. I think I fell victim to my own hubris, or more accurately the Dunning-Kruger effect. There was, and is, so much more to the process than I was aware of, yet I actually thought it’d be straightforward and easy. Having said that, I should state outright that yes, creating a website has never been simpler or easier for the uninitiated, with site builders and turn-key solutions (like wordpress.com or squarespace.com, for example) that largely abstract all the mechanical technicalities away behind simple graphical interfaces that a non-technical person can intuitively operate. A good real-world example customer for these would be an artist creating an online portfolio of their works.

[Image: logo of squarespace.com]

These solutions, however, did not particularly interest me, as I am interested in the technicalities of the actual infrastructure of the website itself; something I found to be largely abstracted out of relevance by these public-facing and user-friendly interfaces. Another concern I’d like to voice is that, although these company services do make it very straightforward to get an online presence, they charge you for every step of the way, and at the end of it you may end up with something that you don’t exactly want, and have spent money on it to boot.

Examples include: purchasing a packaged feature-set that, after gaining some experience, you realise you have no use for; or, in a bid to save money, purchasing the cheapest packages available and then realising after the fact that your use-case requirements are in excess of the service package’s limitations.

This was one of my primary concerns, and as a result it caused me to be very cautious when selecting something out of the numerous and quite frankly somewhat overwhelming options. So many companies, site-builders, hosts, and all the packages and deals that they use and offer. So, after some time being put off from pulling the trigger on anything in particular, then procrastinating (naturally), I finally decided to write up clear criteria for exactly what I wanted.

I always find that when venturing into the unknown (a bit melodramatic, granted), it pays to have a plan, a goal, a list of objectives, a criteria, whatever you want to call it. I have also found it most effective for that plan to be concise in nature and hierarchical in structure. Id est: a numbered list.

So here’s mine:

  1. I wanted a basic website for blogging and light hosting of files. It will consist predominantly of written articles and image-rich guides, as well as hosting small-to-medium files and programs of my own creation (<100MB each). This means the storage size needs to be in excess of 50GB. All those files and HD pictures add up quickly.
  2. I wanted my own unaffiliated domain name. It looks more professional in my opinion. For example: mywebsitename.net instead of something like: mywebsitename.wordpress.com or mywebsitename.googlesites.com.
  3. I wanted the site to be adequately secure against malware, spam, and intrusion with minimal intervention on my part. In other words I wanted to be hands off when it came to securing my contents. I need good ready on-hand security without having to divert my time and efforts into a rabbit-hole of research, at least for now.
  4. And finally, and most importantly: I wanted to be able to have all this whilst maintaining a degree of privacy. I want an online presence without freely advertising my personal information to the world at large.

So that’s it: a basic small blogger’s site, with its own name, adequate hosting, some protection, some privacy, and a comfortable storage limit. Obvious, right? Well, not so in my experience. There is merit in writing down the obvious and enumerating it. It brings it front and centre, and adds it to an objective hierarchy that one can work from.

In the end, after a frustrating period of paralysis by analysis, and exploring a multitude of different options on the market, I decided to just take the shortest route to my goal. Perhaps not the best route for my personal use-case, but that is the kind of thing one sees with experience and hindsight. So I decided to pick a reasonable option, jump in, and see how it goes.

And that is exactly what I did; I ended up going with wordpress.org as my choice of website builder. Three main reasons. One: its ubiquity. It is well known, well used, and well documented, so for any issue I come across, chances are good that someone else has too, and has probably documented a solution to boot. The second reason I liked WordPress was its open-source and community-driven nature. This makes it versatile, meaning that if there is a particular feature I wish for, chances are that someone else has wished for it too, and has a documented implementation of it somewhere. Lastly, the third reason is simple: it is free. This allowed me to tinker with it without any financial investment.

As to why I didn’t just build the website from source myself: well, beyond making a basic website consisting of static webpages linked together, this was beyond my skill-set and interest level at the time, if I am honest. I wasn’t willing to spend the time and effort to learn to implement every little feature that I wanted, which could include anything from animated drop-down menus, to allowing user comments, to embedding videos within articles. It would have required more of a personal investment than I was willing to put in at the time; especially since I just wanted something useable and customisable to be up and running in a timely fashion. That, and I couldn’t justify taking time from other projects and responsibilities; the reward-to-work ratio wasn’t sufficient.

[Image: wordpress.org logo next to wordpress.com logo]

Please note, there is a distinct difference between wordpress.org and wordpress.com. WordPress.org is just the open-source website builder software, whereas wordpress.com is a company that bundles the website builder with its own hosting and support services. They are not the same entity.

Next up, hosting. This one is quite simple since, as far as I know, one competent host is as good as another. I went with bluehost.com since they were recommended by wordpress.org via affiliate links, and their prices for what I wanted were also reasonable. Funnily enough, I got my domain name via Bluehost’s partners. So it was a case of choosing WordPress and being funnelled to affiliates’ and partners’ services for the rest. It made things simple, and since I actually have little experience in setting up websites, I was more concerned with not using the “wrong” company (dodgy or otherwise) or the “wrong” tools (wasting time learning inferior tool sets – been there, done that…) than I was with choosing the best deal. As long as what I got was what was advertised, and what was advertised was good enough to get started.

With the WordPress-optimised setup, I ended up with a “shared webhost service”. Essentially, my website would be sharing the server with many others like it. This is because a simple WordPress website doesn’t really need anything that requires dedicated hardware, for example large processing capabilities for online gaming. This website wasn’t going to be folding proteins or using their server for automated stock trading or anything like that. It also didn’t need a large reservoir of storage space, since it wasn’t an archival or file-hosting website.

The “shared webhost service” is one of the cheaper options available. Others include a “virtual private server”, a mid-tier option allowing the subscriber to have a virtual server with its own allotted RAM and CPU usage. Additionally, and probably the most expensive option available, is renting a dedicated server. It’s exactly what it sounds like: the subscriber just rents a box dedicated to just them, and consequently they can have complete control over it. The latter two mentioned here are overkill for this humble hobbyist’s blog; they are more appropriate for the other examples I mentioned above.

While I was going through the process of setting up the host, and looking through their various options, two features/services that they offered popped out at me, and I would like to highlight them for you. The first is “domain privacy” and the second is an “SSL certificate”.

Domain privacy. This basically allows you to own the website without having your personal information plastered all over whois.com (or who.is, or whois.net, or what have you): websites that comb website registrars for ownership information. As the newly minted website owner, your contact information would be listed there. Alarmingly, this includes your full name, address(!), and any contact numbers or email addresses you provided whilst registering the site. It should also be noted that lying on the registration is apparently punishable by law (I read that somewhere during the signing process, but I can’t find a direct reference or link stating that. Apologies). Unfortunately I can’t actually speak to exactly where that applies, or what kind of punishment is involved.

This is not really a problem for a business, with its own legal identity and premises; however, it most certainly is a problem for the private individual. If you then purchase “domain privacy” from the hosting or registering company, they will act as a mediator and use their information as a substitute for yours on these public listings.

I believe different countries have different laws regarding public display of website owners’ personal information via sites like who.is. Some permit it by default; others favour the owner’s privacy and don’t allow the display by default. My concern here is that, although I live in a country that [with regards strictly to this] favours the individual’s privacy, the company that I’m doing business with is in another country, one that does not. Whose country’s laws take priority? It seemed like the more prudent thing for me to do was to just purchase this service, rather than leave it to chance.
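
If you are curious what a WHOIS lookup actually exposes, you don’t even need a website like who.is; the WHOIS protocol itself is just a plain-text query over TCP port 43. Below is a minimal Python sketch; the server name whois.verisign-grs.com is, to my understanding, the registry WHOIS server for .com/.net domains, so treat it as illustrative (other TLDs use other servers, and many listings are partially redacted these days):

    # Minimal sketch of a raw WHOIS lookup over TCP port 43.
    import socket

    DOMAIN = "tinkerersblog.net"
    WHOIS_SERVER = "whois.verisign-grs.com"   # assumed registry server for .com/.net

    with socket.create_connection((WHOIS_SERVER, 43), timeout=10) as sock:
        sock.sendall((DOMAIN + "\r\n").encode("ascii"))   # the query is just the domain name
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)

    print(b"".join(chunks).decode("utf-8", errors="replace"))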

The second feature of note I encountered is that of an “SSL certificate”. Put simply, this gives the website a certificate of authenticity, issued by a certificate authority charged with verifying that the websites users connect to are who they say they are. In addition, this allows secure connections to the site’s servers using the HTTPS protocol. This is important to me, as I know I am rather reticent to visit non-HTTPS websites, especially since many modern browsers such as Firefox (circa 2020) warn users who connect to non-HTTPS or unsecured websites. It’s just another layer of security to take advantage of; one that grants a level of authenticity.
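
If you want to see what that certificate actually contains for any given site, it can be inspected directly. Here is a small sketch using Python’s standard ssl module (purely illustrative; the connection simply fails if the certificate doesn’t verify):

    # Small sketch: open a verified TLS connection and print certificate details.
    import socket
    import ssl

    HOST = "www.tinkerersblog.net"

    context = ssl.create_default_context()   # uses the system's trusted certificate authorities
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()          # parsed certificate of the server
            print("issuer: ", cert.get("issuer"))
            print("subject:", cert.get("subject"))
            print("expires:", cert.get("notAfter"))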

[Image: orange cPanel logo]

Moving on. It should be noted that when I purchased the hosting, I also got the ability to set up my own email system using cPanel. cPanel is a general control panel for web hosts that enables you to manage the various programs for your website; in this case, an email client. I decided to go with one central email address (mail@tinkerersblog.net) using the webmail service and the Roundcube mail client. There are other email clients available, including Horde and SquirrelMail.

That’s basically the process from start to finish. At this point I had a basic WordPress template site and email; all that was left was to customise it to my liking and create content. In retrospect, the biggest hurdle for me was the over-abundance of choice on the market. It required an exhaustive process of researching and vetting the various services, and the options available therein. Other than that, once you have chosen a particular company, host, or web-builder you like, they tend to do a good job of keeping you in their ecosystem for the rest of the things you need to get set up. They do this predominantly via affiliate links and discounts.

The funny thing is, now that I am committed and have paid approximately £180 for a 3-year deal for this site, I still think that maybe I could have found something better: a particular tool-set or cheaper deal that would suit me more. Maybe I invested in a bad company or technology, or perhaps I made a mistake when buying optional add-ons? Who knows? I guess I will, later. Still, it is a nagging feeling that lingers on after the decision has been made.

Besides, the primary objective was to get started, and that has been done. It’s only really with hindsight that I can make better decisions for my particular use-case, and that comes later. I guess that Steven Wright quote holds water here: “Experience is the thing you get just after you needed it” … I am paraphrasing. Anyway, that’s all for my musings.

Thank you for reading.

Sources / References / Further reading:


https://whois.icann.org/en/domain-name-registration-process
https://whois.icann.org/en/about-whois
https://en.wikipedia.org/wiki/WHOIS
https://en.wikipedia.org/wiki/CPanel
https://en.wikipedia.org/wiki/Webmail
https://en.wikipedia.org/wiki/Roundcube
https://en.wikipedia.org/wiki/SquirrelMail
https://en.wikipedia.org/wiki/Horde_(software)
https://en.wikipedia.org/wiki/Domain_privacy
https://en.wikipedia.org/wiki/HTML
https://en.wikipedia.org/wiki/Cascading_Style_Sheets
https://en.wikipedia.org/wiki/JavaScript
https://www.dreamhost.com/blog/wordpress-differences-beginners-guide/