#0036: Introduction of a website changelog

Preamble

I initially launched this website in 2020-02, with nothing but the default WordPress “coming soon” page to show for itself. Then after some months (2020-06) I finally managed to get round to actually publishing content onto it. And ever since then, I have been in a continual (albeit sporadic and intermittent) process of revision and iterative improvement.

So after about a year and a half of having this website online and functioning as intended, I now see the potential need for a master changelog here: a public-facing log that will record all notable changes made to the website.

Utility of a public facing website changelog

Although this changelog will record all noteworthy changes made to the website as a whole, I specifically see its utility with regards to noting changes made to the content of this blog’s articles themselves, rather than the website at large.

What I mean by this is that it is more important to note post-publication changes made to the content of articles than it is to note changes to the wider website as a whole. This is because most changes made to the wider website, outside of the articles hosted here, will likely be concerned with aesthetics; such as the addition or removal of a decorative graphic.

Simply put: they are less important changes in terms of affecting the value proposition of the website itself. I do intend to note these types of things as well, at least for the most part, although very minor website changes of this kind are unlikely to be noted.

That being said, I should restate the main function of this changelog: it is to note changes made to article content. This is because this blog’s articles (or blog posts) are where its primary (or most substantive) value as a website lies. I.e. the primary reason a person may visit this site is to read the articles. This logging will allow readers to avoid confusion when/if they visit an article that they have already read, only to find that the content has to some degree changed in the interim.

My M.O. regarding editing published articles

Once an article has completed the development process and is finally published, I tend to have a habit of coming back to tweak and change things after the fact. This tends to happen some time after the article is published, when I have had sufficient time to rest and cool off on the topic. At this point I am usually more fresh-minded, and hence more apt to find better methods of getting my point across, as well as to spot any residual errors previously missed.

I am of the mind that I should also chronicle these changes as I make them. This is in order to avoid a sense of revisionist history; one caused by the absolute erasure of any mistakes such as erroneous calculations, half-witted conclusions, or simple misinformation. I admit I am prone to getting it wrong a lot of the time. Especially when it comes to speculations made with limited observations, or ones unfortunately coloured with personal biases.

With that in mind, I should take a moment to state clearly the nature of this website, in order to eliminate any misunderstandings or confusion as to the nature of this publication. As the name should suggest, this website is literally “a tinkerer’s blog”. The articles held therein are presented not as an authoritative source of information, but rather as my (and only my) best understanding of any particular subject at the time. Complete with grammatical mistakes, spelling errors, personal experience, and biases; as well as good ol’ fashioned human ignorance and incompetence.

Although I (think I) do my due diligence in researching for articles, as well as re-reading my work several times over before publishing, in order to (give myself the opportunity to) catch any and all errors that I can; unfortunately, often by that time my mind has become exhausted with the subject matter, and would rather move on to something else. Anything else! (Maybe even a refreshing punch to the testicles.) Add to that time pressures such as work and scheduled commitments. Well. They all add up; pushing me to hit the publish button perhaps earlier than I otherwise should.

Hence in an article’s final proofreading and finishing-edits stage, I tend to find myself skimming sentences, or simply unconsciously correcting the text grammatically and/or semantically in my head. I.e. I knew what I meant by what I wrote, although I left the text in a state where its messaging is ambiguous, nonsensical, and/or open to multiple unintended interpretations. Often I miss mistakes because of this, and only really find them after I have had some time to ‘cool off’ on the subject, as it were.

So that’s what normally happens with any given article. Post publish edits and refinements seem like a standard protocol for me. I even have a small to-do list regarding edits I need to make to past articles.

For example: my review of the video game “Princess Remedy: In a world of hurt” has no critique of the game’s soundtrack. I somehow just completely forgot to mention it at all; I just blotted the concept of it out of my mind at the time of writing. So at this point, I intend to go back and insert this into it at a later date.

The thing is: I don’t like the idea of this additional content suddenly appearing within that article one day, apropos of nothing, and masquerading like it has always been there from the beginning. I’ll leave that revisionist M.O. to the articles on political/activist news websites.

Hence, I need some way to communicate to the audience that it is an add-on edit. In the past I solely used what I call an “update tag”: I’d insert a set of square brackets featuring the date before the add-on segment. Basically this: “[UPDATE: 2022-0X-XX] The music is …”. In the future I think I will use the changelog alone to note smaller changes, and both the changelog as well as an in-article update tag for larger updates; such as an entire additional segment to an article.

RSS

Just as an aside: if for whatever reason you want the raw, undoctored initial publications, free of my post-publish meddling, then please subscribe to my RSS feed. It sends you the articles as they are published, and doesn’t update the content after that initial data transfer.

To do this, copy the below link into your RSS aggregator of choice:
https://www.tinkerersblog.net/rss

Closing thoughts

Although, as stated, the main reason for a changelog is logging post-publication article edits, it will also be good for keeping track of more general activity around the website: things such as when new manual scans are added, or which pages have been recently edited. It’ll give the readership insight into where my attention regarding the website has been recently, allowing them a sense of the frequency and general trajectory of my activities here. Which would be useful to anyone interested. (If there really is anyone interested … that is.)

I think it’ll add real utilitarian value to the website. But we’ll see exactly how much once it is actually implemented and has had some time to operate. Theoretically there is no reason why a website shouldn’t have a changelog; it is a software product with ongoing development just like any other. However, I do wonder why so few other websites actually have a public-facing changelog.

It could be something as simple as a public changelog not truly being a necessity. Or it could be the fact that it would bring a level of perhaps unwanted transparency to their website. It’s hard to simply vanish things if you have a policy of documenting changes. I guess you could just not document the vanishing of the undesirable content, but still document the more mundane changes made; although that does undermine the utility of the tool.

If I were pushed to give an answer, I’d say that most people just don’t want the work of it. For example: with for-profit websites tending to streamline their overheads (i.e. cut costs wherever they can), coupled with the continual communication and co-ordination between multiple levels of staff required for implementing and routinely updating a changelog, they most likely wouldn’t want to bother with one. Especially since there is little in the way of returns in terms of profit for the work necessary.

Even single-owner general hobbyist websites probably wouldn’t bother with one either, as the single operator likely focuses their efforts on documenting their actual hobby activities, rather than developing the website itself. I’d imagine that this is especially true in cases where the subject of their hobby or activity is unrelated to technology.

So unlike with this website, there’d be no on-topic value to discussing website development as a subject. Examples include a website documenting: a homestead, a hobby farm, painting miniatures, religious education, or bodybuilding, to name a few. Basically any website where discussing the website itself is unrelated to the core subjects of the website.

That’s all really. Changelog incoming. (Actually it’s already here; this article is a month late. :D)
Thank you for reading.

Term Glossary


RSS – Really Simple Syndication
M.O. – Modus Operandi (mode of operation)

Links, references, and further reading


https://en.wikipedia.org/wiki/Rss


https://en.wikipedia.org/wiki/Changelog

#0032: Instructions on digitising physical documents

Preamble

This will be a quick guide for anyone who may be interested in creating their own digital archives of physical documents. Although there are undoubtedly any number of different ways to achieve this task, I only intend to show you one method: the method that I specifically use (at the time of writing) in order to create, label, modify, and archive document files. Files such as the ones hosted on this website’s “Device Document Scans” page.

Hyperlink: https://www.tinkerersblog.net/device-document-scans

Tools and equipment

Hardware:

  • flatbed scanner
  • personal computer

Software:

  • Linux Mint (operating system)
  • Bash terminal (TUI program for accessing other TUI programs)
  • simple-scan (GUI scanning program)
  • GIMP (GUI WYSIWYG image manipulation program)
  • ImageMagick convert (TUI image manipulation program)
  • img2pdf (TUI file format conversion program)
  • xviewer (GUI image displayer program)
  • xreader (GUI PDF displayer program)

Process overview

1) Scanning the physical document.
2) Initial edit, and virtual file export of scanned images.
3) Edit of image dimensions and watermark application.
4) Creation of alpha-less versions of the edited images.
5) Compilation of all alpha-less images into a single PDF file.
6) Test, organisation, and archiving of files.

Process explained

1) Scanning the physical document.

I use the flatbed scanner on a Pantum M6607NW laser printer-scanner combo, in conjunction with a standard GUI GNU/Linux program called simple-scan. One by one I scan all the document’s pages using a 300 DPI (Dots Per Inch) image fidelity setting.

2) Initial edit, and virtual file export of scanned images.

I use simple-scan to export all the raw scanned images in a lossless PNG image file format.

Although simple-scan has some basic image editing functionality, such as image rotation and cropping, I tend to shy away from cropping images here due to the lack of precision available with the tool. However, a rough crop to minimize image file size can be useful at this stage; especially when scanning documents with a smaller page size (e.g. A5), which would otherwise have a lot of needless (memory consuming) white-space in each image.

Additionally, I find that rotating whole images at this stage using simple-scan is a better experience than rotating them later using GIMP (or even xviewer). This is because, anecdotally, it seems to use less system resources for some reason. It’s just a smoother experience.

As for the outputted files themselves: I like suffixing metadata information onto the file name. In this case “_300DPI_scan”. This is to help identify specific files when they all get archived together.

It also adds a certain element of future-proofing because I may want to create higher or lower DPI versions of the same documents for specific purposes in the future; without it causing a naming conflict, and upsetting my global naming scheme.

Output:

generic_manual_p1_300DPI_scan.PNG
generic_manual_p2_300DPI_scan.PNG
generic_manual_p3_300DPI_scan.PNG …

3) Edit of image dimensions and watermark application.

I use GIMP (GNU Image Manipulation Program) to crop each page image with pixel perfect uniformity (i.e. to the same image dimensions). I then apply my watermark to each page and export them as PNG images again. I mark the exported PNG files with the ‘WM_’ prefix to differentiate them from the original PNG images, which would otherwise have the same file name.

For the sake of clarity I should state that I keep all the original files (raw scan images) just in case I need to work with them again and, for some reason, do not wish to use the edited versions. It’s good practice to always keep and archive the original unadulterated images for instances like these.

Input:

generic_manual_p1_300DPI_scan.PNG
generic_manual_p2_300DPI_scan.PNG
generic_manual_p3_300DPI_scan.PNG …

Output:

generic_manual_p1_300DPI_scan.PNG
generic_manual_p2_300DPI_scan.PNG
generic_manual_p3_300DPI_scan.PNG …

WM_generic_manual_p1_300DPI_scan.PNG
WM_generic_manual_p2_300DPI_scan.PNG
WM_generic_manual_p3_300DPI_scan.PNG …

4) Creation of alpha-less versions of the edited images.

I use the terminal “convert” program to remove the alpha layers of every PNG image. This is because “img2pdf” cannot compile PNG images that contain alpha layers (i.e. clear sections/layers within an image) into a PDF. If you try to, img2pdf will return an error message that contains additional instructions. Unfortunately it will still also output a 0 byte PDF file, which you will have to delete.

Error message:

WARNING:root:Image contains transparency which cannot be retained in PDF.
WARNING:root:img2pdf will not perform a lossy operation.
WARNING:root:You can remove the alpha channel using imagemagick:
WARNING:root: $ convert input.png -background white -alpha remove -alpha off output.png
ERROR:root:error: Refusing to work on images with alpha channel

The “convert” command options assign white as the background colour of the image. This is the colour that replaces any clear (or alpha) sections of the image. Next the alpha sections of the image are removed, and then all alpha functionality of the PNG file is switched off.

Please note that the exact order in which the command options are passed to the program is not important; I only describe them in this order for human understandability. Additionally, the “convert” program does not actually convert the original files inputted into it; it instead outputs a modified copy. It will however overwrite the original file if you give the output file an identical name.

I suffix the “_no_alpha” label onto the outputted files to differentiate them from their predecessors. As you can see, the file names are getting long and unwieldy; especially if the manual itself already has a long name. However, the various prefixes and suffixes all serve a purpose, and are necessary for file version distinction.

Command:

convert WM_generic_manual_p1_300DPI_scan.PNG -background white -alpha remove -alpha off WM_generic_manual_p1_300DPI_scan_no_alpha.PNG

Input:

WM_generic_manual_p1_300DPI_scan.PNG
WM_generic_manual_p2_300DPI_scan.PNG
WM_generic_manual_p3_300DPI_scan.PNG …

Output:

WM_generic_manual_p1_300DPI_scan.PNG
WM_generic_manual_p2_300DPI_scan.PNG
WM_generic_manual_p3_300DPI_scan.PNG …

WM_generic_manual_p1_300DPI_scan_no_alpha.PNG
WM_generic_manual_p2_300DPI_scan_no_alpha.PNG
WM_generic_manual_p3_300DPI_scan_no_alpha.PNG …
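
As a side note: when a document has many pages, a simple Bash loop can apply the same conversion to every watermarked page in one go, rather than running the command once per file. This is just a minimal sketch, assuming the file naming pattern used in this example:

for f in WM_generic_manual_p*_300DPI_scan.PNG; do
    # strip the .PNG suffix and append the "_no_alpha" label to the output name
    convert "$f" -background white -alpha remove -alpha off "${f%.PNG}_no_alpha.PNG"
done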

5) Compilation of all alpha-less images into a single PDF file.

I compile all the watermarked no alpha layer versions of the image files into a single PDF file using “img2pdf” via the terminal.

Command:

img2pdf WM_generic_manual_p1_300DPI_scan_no_alpha.PNG WM_generic_manual_p2_300DPI_scan_no_alpha.PNG … -o generic_manual_300DPI_scan.PDF

Input:

WM_generic_manual_p1_300DPI_scan_no_alpha.PNG
WM_generic_manual_p2_300DPI_scan_no_alpha.PNG
WM_generic_manual_p3_300DPI_scan_no_alpha.PNG …

Output:

generic_manual_300DPI_scan.PDF
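
If you would rather not type out every page file name by hand, a shell glob can be used instead. One caveat, offered as an assumption rather than a guarantee about your own naming scheme: the pages must end up in the correct numerical order, and a plain alphabetical glob will place p10 before p2. Piping the file list through GNU sort’s version sort avoids that (this sketch assumes none of the file names contain spaces):

img2pdf $(ls WM_generic_manual_p*_300DPI_scan_no_alpha.PNG | sort -V) -o generic_manual_300DPI_scan.PDF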

6) Test, organisation, and archiving of files.

This stage firstly involves testing whether the PDF actually works as expected: whether it is functional, whether all the pages contained therein are in the correct order, and whether it renders and scales correctly. To do this I just open the file using Mint’s default PDF viewer program (namely xreader), and skim through the document’s pages.

This stage also involves putting each different collection of images from the various stages of this process into its own labelled ZIP archive file, and then placing all of these archives into another container ZIP alongside the ultimate resultant PDF.

This container is then placed into the local “device_document_scans” folder, which is then copied over to the backups. Finally, I also upload the PDF by itself onto this website.

Output:

generic_manual_300DPI_scan.ZIP

Containing:

generic_manual_300DPI_scan.PDF
imageset_no_alpha.ZIP
imageset_raw.ZIP
imageset_watermarked.ZIP
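
For completeness, here is roughly what the archiving for this stage could look like on the command line, assuming the example file names used throughout this article and the standard zip utility:

zip imageset_raw.ZIP generic_manual_p*_300DPI_scan.PNG
zip imageset_watermarked.ZIP WM_generic_manual_p*_300DPI_scan.PNG
zip imageset_no_alpha.ZIP WM_generic_manual_p*_300DPI_scan_no_alpha.PNG
# finally, bundle the PDF and the three image-set archives into the container ZIP
zip generic_manual_300DPI_scan.ZIP generic_manual_300DPI_scan.PDF imageset_raw.ZIP imageset_watermarked.ZIP imageset_no_alpha.ZIP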

Thoughts on tools and equipment

Hardware

As far as hardware requirements go, it’s just the bare essentials really: a decent scanner and computer. Neither device needs to be anything special, just fit for purpose.

Computer

As for computers, whatever computer you are currently using is likely to be just fine. The main thing that may become an issue is probably system RAM size; and even then only when scanning large (600+ DPI) multi-page documents at the same time.

This is because the scanning program will have to hold all of these rather large images uncompressed within RAM as you scan through the document. RAM may also become an issue when using image manipulation software like GIMP. If it is too low, it may limit how many images you can work on concurrently. At the very least, it may limit your ability to do other things on the machine as you process these images; for example, running a RAM-greedy application such as a modern internet browser (e.g. Firefox or Google Chrome).

Another thing that may be a limiting factor with computers is CPU processing power. When converting file formats or compiling a series of images into a portable document file, your system may freeze or become unresponsive. Especially if the programs being used aren’t optimised to be multithreaded, resulting in all the work getting queued on the same CPU core and thread. This in turn causes the unresponsiveness, as user input is queued behind that processing.

To sum it up: any computer with more than 2-4 gigabytes of RAM and an early generation Intel i3 processor will likely suffice. However, there are too many variables to say definitively whether these system requirements are adequate; such as the desired scan image size, and the resource use of the operating system, scanning program, and background processes.

Scanner

Now onto the scanner. Most if not all modern flatbed scanners should be adequate. Chances are, if they connect to your computer via the USB 2.0 protocol or better, then they are new enough to provide the 300 DPI (dots per inch) image quality that I use for digitising my manuals. If you are scanning photographs you may require a higher DPI rate, such as 600 DPI, to maximize image detail retention.

However, since the value of my manuals is rather utilitarian in nature, 300 DPI is a fine image quality for my use case. By ‘utilitarian’ I mean that the information printed onto the manuals is what I am primarily preserving, not each page’s visual aesthetic. Because of this, I just need them to be legible, without necessarily preserving every minute page detail.

Heck, an argument could even be made to go down to a 75 DPI scan setting: it’s perfectly useable whilst also minimizing all file sizes; including all intermediary portable network graphic images, as well as the final portable document file.

However, I find that working with 300 DPI images (which translate to a maximum of 2550*3507 pixels for an uncropped full scan) is a good compromise between image detail and workability/usability.

Example of 1200 DPI scanned image unable to be displayed with xviewer

Scan DPI example files


(Feel free to download and test these files on your own system.)

Scan image metadata translations

(Translations based on a scan of the full scanner bed of a PANTUM M6607NW)

Key: scan quality (Dots Per Inch) / image dimensions (pixels) / file size (bytes)

  1. 75 DPI / 637*876 p / 870.9 kB (lossless PNG)
  2. 150 DPI / 1275*1753 p / 4.2 MB (lossless PNG)
  3. 300 DPI / 2550*3507 p / 17.5 MB (lossless PNG)
  4. 600 DPI / 5100*7014 p / 62.8 MB (lossless PNG)
  5. 1200 DPI / 10200*14028 p / 211.9 MB (lossless PNG)

Software

Since my operating system of choice is Linux Mint running the Cinnamon desktop environment, I just use the programs that are either available with the initial install package as standard; or downloaded from the standard Ubuntu repository if necessary.

Simple-scan comes preinstalled with Linux Mint. It is the default scanning utility program. There are more robust alternatives such as ‘xsane’; however my philosophy with regards to tools like this is that one only upgrades tools or seeks alternative tools when the default tools are found to be wanting. I.e. when there’s a particular functionality or quality that the current toolset doesn’t provide; and since the default simple-scan program provides adequate functionality, I don’t need to seek alternatives just for the sake of it.

Moving on. GIMP, ImageMagick, and ‘img2pdf’ are all available within the standard Ubuntu software repository, so all three can be installed using the ‘sudo apt-get install’ command. However, if you are using a Linux distribution other than Linux Mint, it is recommended that you first use the “apt-cache search [program]” command to ascertain whether or not they are available within whatever repository you are using.

sudo apt-get install gimp
sudo apt-get install imagemagick
sudo apt-get install img2pdf
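
For example, to check whether img2pdf is present in your configured repositories before attempting to install it:

apt-cache search img2pdf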

To sum up GIMP: if you are coming from Windows, you may be used to other image manipulation programs like ‘paint.net’ or ‘Adobe Photoshop’, if not GIMP itself, since it is a multiplatform program and available on Windows. Anyway, if you have used any modern full-suite WYSIWYG image manipulation program, then GIMP will be an easy enough program to jump on to.

Finally, ImageMagick. This is a software toolkit that you access via the Bash terminal. Many people, myself included, prefer TUI based programs like this due to their ease of use, user interface uniformity, and functional robustness.

I often write scripts that utilise programs accessible via Bash, and the programs provided by ImageMagick are no different. Once a person gets used to using them, it becomes a natural progression to create scripts which then automate the process.

This is useful for situations such as batch conversion of multiple files, as scripting allows the user to go AFK or do something else, rather than babysit the process. Scripting and chaining commands like this is probably the greatest strength of CLI/TUI programs over GUI programs.
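
To make that point concrete, here is a minimal sketch of the kind of script I mean. It ties together the alpha removal and PDF compilation steps from the process above, parameterised by the manual’s base name; the script name, argument, and file naming pattern are illustrative rather than anything official:

#!/bin/bash
# Usage: ./make_pdf.sh generic_manual   (hypothetical script and argument names)
# Removes the alpha channel from every watermarked page of the named manual,
# then compiles the results into a single PDF in numerical page order.
name="$1"
for f in WM_"${name}"_p*_300DPI_scan.PNG; do
    convert "$f" -background white -alpha remove -alpha off "${f%.PNG}_no_alpha.PNG"
done
img2pdf $(ls WM_"${name}"_p*_300DPI_scan_no_alpha.PNG | sort -V) -o "${name}_300DPI_scan.PDF"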

Closing thoughts

If you aren’t already accustomed to using a Linux based distro, then one thing I recommend keeping in mind is hardware compatibility. It is probably this platform’s biggest weakness.

This is specifically because most companies build their products to target the Windows platform; often facilitating device functionality by using proprietary drivers, and oftentimes even programs, such as proprietary controller programs for LED keyboards. These drivers are sometimes absent on Linux. However, in most cases there are open-source alternatives.

In the past this used to be a bigger issue. Thankfully the list of supported peripheral devices has gotten much better as of late. As it stands at the time of writing, and according to my personal experience as well as some online reading: most devices work flawlessly plug-and-play; however, some devices work for the most part but are missing some advanced functionality, and some devices don’t work at all.

Unfortunately, the best way to tell whether or not your device will work is by simply plugging it in and fiddling with settings and open source drivers, until it either eventually works or you give up. Whichever comes first.

As an example: I had quite a few issues with my system not recognising my Pantum M6607NW printer-scanner combo properly, despite official Linux drivers being available on the standard repository, and via the company’s website. Even now, after resolving that problem and getting the thing working, I am still having some minor issues with the device.

For example, if you paid attention to the images above, you may have noticed that simple-scan allows for a 2400 DPI scan in conjunction with the Pantum M6607NW. Unfortunately this setting doesn’t work as expected. It does scan the document, albeit noticeably slower than on the 1200 DPI setting; which is as expected, due to the scan heads collecting more detail from each page segment. However, the resultant image has the same pixel dimensions as a 1200 DPI scan. So if there is a higher detail density, it isn’t reflected in larger image dimensions – as is the case with all other DPI settings.

Although xviewer failed to open images of this size, the Firefox browser did not; and upon visual inspection and detail comparison between the 1200 and the 2400 DPI scans, I have concluded that they are identical. See for yourself, the files are listed in this article. Knowing this, it is likely that simple-scan is providing an option that the scanner cannot support. Although the Pantum’s slower read speed on the 2400 setting has me doubting this conclusion, since it seems to exhibit a programmed hardware response to this setting.

I could likely find the solution eventually by combing through the official generic M6600 series online manual for my machine, then hunting down more specific documentation … although it is frankly not a priority at this point, as I am not planning on using a 2400 DPI scan setting anytime soon. I only highlight this specific issue to make you aware of the kind of troubleshooting fun to expect on this platform.

So if you are moving to a Linux based platform for productivity purposes, well you can’t say that you haven’t been warned. Having said that, don’t let that stop you from using this platform for this purpose. When it works it works fantastically, and when it doesn’t there is always something that you can do yourself to make it work. You have to get used to being your own tech support.

Best of luck archiving your documents, and as always:
Thank you for reading.

Glossary of terms

AFK: Away From Keyboard
Bash: Bourne Again SHell
CLI: Command Line Interface
DPI: Dots Per Inch
GIMP: GNU Image Manipulation Program
GUI: Graphical User Interface
PDF: Portable Document Format
PNG: Portable Network Graphic
PnP: Plug and Play
TUI: Text User Interface
WYSIWYG: What You See Is What You Get

Links, references, and further reading

#0031: Creating a TF2 themed RimWorld scenario mod

Preamble

I recently decided that I’d like to try dipping my toes into creating mods for RimWorld. In case you are unfamiliar with the game: RimWorld is a base builder, where the core objective is to try to create a functioning base or colony.

Building and maintaining this colony is achieved by issuing orders to various pawns. Examples of orders include: building the structures you designed, hunting animals, farming crops, fixing broken items, creating tradable goods, and cooking food to name a few. Additionally, it also includes arming up and engaging in combat.

RimWorld is a game that is in a similar vein to the venerable classic that is Dwarf Fortress. And like DF, once you have built a base that is halfway decent, you can then move on to secondary objectives such as: exploring the world, or actively trading and warring with other factions.

I decided to start modding RimWorld with something very small. Something that could be done in one or two sittings, and with minimal research and planning. That way the mod doesn’t risk spiralling out of its initial small scope; which would likely result in an eventual state of demotivation and ultimately project abandonment, primarily caused by ongoing feature creep due to poor project management (scope discipline).

With that in mind: I decided on a simple custom scenario, coupled with a preset roster of pawns. For nostalgia’s sake: I decided to give this scenario a Team Fortress 2 theme. As a rule of thumb (and for obvious motivational purposes) I only really create things that I myself would like to play with. And to me at least: the idea of playing with the TF2 roster within the RimWorld settings sounds pretty fun. I hope you agree.

Creating a custom scenario

Scenario creator tool

Straight off the bat I should mention that the built-in scenario creator tool does not facilitate editing the individual starting pawns itself, just the world conditions and the equipment that they start with. In other words it does not allow for the modification of each individual pawn’s variables such as traits and skills. To specify pawn variables, one has to use an additional mod called “EdB Prepare Carefully”. Which I will discuss in more detail later.

With that said, let’s begin. Creating a custom scenario is a rather straightforward affair. All you need to do is navigate the menus within the RimWorld game, and follow their very simple instructions.

There are several game options available from the scenario editor. However, for the most part, editing a game scenario consists of choosing how many pawns the player is able to choose from and then start with, followed by choosing their starting load-out of equipment and resources: weapons, tools, food, animals, building materials, etcetera.

Additionally one could also add various world conditions such as periodic events (e.g. meteorite crash), permanent weather conditions (e.g. toxic fallout), a game time limit, as well as more wacky things such as every world pawn exploding upon death.

It seems like a rather fun thing to play with; however, I only required an equipment list that vaguely resembled something that the real TF2 cast might have. I tried to give each pawn similar weapons and tools to the characters that I modelled them after. However, the group got little else in terms of general equipment and technology, outside a handful of exceptions for narrative reasons; namely the ground scanner and a drill, since they are technically (narratively speaking) on this rim world in order to survey it for Australium.

I should also mention that I designed this equipment list with the mod “Simple Sidearms” in mind. In other words, each character pawn was designed with the intention that they have the ability to carry more than one weapon. For example, the Sniper was given either a sniper rifle or recurve bow to equip as a primary weapon, with the gladius (a functional surrogate for his kukri or bush knife) to be used as a secondary weapon (sidearm). Although the mod itself isn’t strictly necessary. If you choose not to use it, you’ll just be saddled with an abundance of surplus weaponry sitting in your stockpiles. That’s all.

Pawn equipment list

Sniper:

  • [x1] sniper rifle
  • [x1] recurve bow
  • [x1] plasteel gladius

Pyro:

  • [x1] incendiary launcher
  • [x1] molotov cocktail
  • [x9] incendiary shells

Scout:

  • [x1] shotgun
  • [x1] wooden club

Soldier:

  • [x1] shotgun
  • [x1] plasteel breach axe
  • [x3] triple-shot rocket launcher

Engineer:

  • [x1] shotgun
  • [x1] autocannon turret
  • [x1] plasteel mini-turret

Medic:

  • [x1] revive serum
  • [x18] medicine
  • [x1] vitals monitor

Heavy:

  • [x1] minigun

Demoman:

  • [x1] frag grenade
  • [x1] plasteel longsword
  • [x60] beer

Spy:

  • [x1] revolver
  • [x1] knife

Miscellaneous:

  • [x1] ground penetrating scanner
  • [x1] deep drill
  • [x18] packaged survival meal

Alongside the equipment list, I also wrote some flavour text for the scenario. However, as far as what I wanted to achieve with this mod, this is as far as the scenario editor went. The next thing I needed to do was edit the nine random pawns the scenario provided into the TF2 mercenaries using the Prepare Carefully mod.

Creating custom pawns

In order to create my custom pawns I needed the mod “EdB Prepare Carefully”. This mod allows the player to edit their pawns to a far deeper level than the standard RimWorld tool does; which only really allows rolling a completely new character with randomised stats. Before this mod, I remember having to keep clicking the randomise button repeatedly until I eventually got something halfway decent. A process that honestly gets old rather quickly.

Using this mod I created a custom nine pawn preset. With each pawn having their own unique appearance, backstory, traits, health conditions, and skills. Once finished I saved this configuration locally, in a file named “TF2_crew.pcp”. It saved as a custom XML file that was suffixed with “.pcp”, which I assume stands for “Prepare Carefully Preset”.

And that’s it. That is all that there is to the process. Easy really. Although I must say that it actually took me several hours to get all nine pawns’ various stats just so. This is because I can be a rather pedantic perfectionist when it comes to the little (read: insignificant) details. Things like which eye the Demoman is missing (left), and whether I should give him a peg (left) leg or not.

That’s not even to mention assigning each pawn’s skills, since they absolutely have to be (in my mind) representative of the character. This was then exacerbated by the fact that I also tried (and mostly failed) to balance the pawns in terms of usefulness and general colony value; all whilst retaining each character’s unique flavour, like Pyro’s oddness, or Demoman’s alcoholism. Needless to say it took some time to settle on such things.

This balance of priorities, often working against each other, ended with a reasonable compromise in the final version. At least I think so. Still, I learned that Engineer and Medic are by far the most useful pawns in application, and that if you allow the other pawns to drink Demoman’s beer, causing you to completely run out by day four … well, let’s just say that I nearly put a bullet in him myself, after his third low-mood tantrum due to the alcohol withdrawal debuff coupled with his natural pessimism.

Scenario narrative and expected gameplay

Explaining the narrative premise

Yes, there is indeed a story here. There is a reason for these guys to be on an extraterrestrial planet 3000 years in the future. The story is rather simple: 3000ish years ago, after exhausting the Earth’s supply of Australium, TF Industries decided to look for it on other planets.

So they built a fleet of Mann Co. brand low cost space rockets. A thousand of them. Each rocket contained 9 cryogenic life support pods designed to keep their occupants in a state of suspended animation; the occupants naturally being clones of the mercenaries. Cheap, useful, and expendable. These clones were then shot into space with the mission to survey any planets that they land on for Australium.

That is if they actually make it to one. And after three thousand years of drifting in space, and against all odds: one rocket managed to actually make it to a habitable planet. It also somehow managed to deposit its cargo of crypto-sick mercenaries and their gear, just in time to avoid catching them in the fires of it violently exploding in the planet’s atmosphere.

Now these mercenaries find themselves on a hostile planet with minimal supplies other than guns. And with no direction other than a 3000 year old order to survey the planet for a rare resource.

Just for the sake of clarity, I should mention that the resource Australium is not implemented within this mod. It is purely a narrative plot device. Funnily enough, implementing an extra resource like this is exactly the type of feature creep I mentioned earlier that ends up killing my projects. It’s a rabbit-hole that I don’t want to go down, nor need to go down, as I simply want to bang out a small mod that consists of a custom scenario and character roster. That’s it.

In-game scenario text

Incentivising gameplay

I designed this setup for a combat heavy game. Since the player only starts with 18 meals (enough to feed a team of nine for about a day), no money for trade, no animals, and only a little medicine – it incentivises more aggressive actions in order to survive the early game. Especially at higher difficulties and challenging world conditions. Keyword: “incentivise”, not force.

The players are encouraged to strip the map of resources early: steel, components, herbal medicine, berries; as well as deconstructing buildings for their materials, and attacking the ancient danger room much earlier than usual. This is because they don’t have the time to build up resources normally, by, for example, farming and cutting stone blocks. The nine pawns will just eat too much in the meantime.

Additionally the fact that every pawn also has the “psychopath” trait means that many of the drawbacks to bloody play-styles are removed. Such as emotional debuffs due to executing prisoners, butchering humans, or harvesting organs.

All of the above factors leave early bloody aggression as a very viable and deeply incentivised play-style. Basically, I designed the TF mercenaries to play like the TF mercenaries. In other words: a hostile, invasive, violent paramilitary force, and not an agrarian farming community. Thank you very much.

Having said all of this good stuff, I should also parenthesise it with this final sentiment: don’t feel like you have to play these characters out in the way that I designed them. Feel absolutely free to tinker with them however which way you wish. Is Demoman’s alcoholism annoying you? Remove it. Don’t like how slow Heavy is? Remove the Slowpoke trait from him. Pyro burning down your cornfield in the middle of the night for no reason? You get the message. Although I designed things in a way I personally find compelling; it’s your game at the end of the day. Play it your way™

Technical issues explained

RimWorld UI and editing XML files directly

Although creating a scenario using the in-game menus is simple enough, the cumbersome basic user interface for this can get rather frustrating rather quickly. Because of this, I quickly made an initial save of the scenario with all the basic fields and variables populated; this was in order for the RimWorld binary to create an XML file for the scenario. A file that I then chose to edit directly with a text editor.

I find editing XML files in this way, with a text editor, to be preferable to fiddling with the game’s user interface. For example, if I wanted to reorder a list of items by taking the bottom item to the top of the list or vice versa: using a text editor, it is as simple as cutting and pasting the textual data set into its desired place.

Whereas doing this within the game requires one to click on either the up or down arrow button in order to make the list item move a single place, by swapping positions with its respective neighbour. I should also mention that when the item swaps position with its neighbour, the list item itself moves on-screen, and not the list items around it. This means that you can’t just keep smashing the arrow button to skip several places quickly, because the button moves from under your mouse once clicked. Imagine wanting to order a list alphabetically and then having to do something like that for every item on a 30 item list. Tedious. You’d essentially have to manually bubble sort the entire list.

Using a text editor to bypass this went fine for the most part. I was able to reorder the populated list of equipment easily enough, as well as change item quantity and material (when applicable). I did however reach a sticking point here. After several edits I found that the RimWorld binary no longer recognised the scenario file. Something within the file was breaking the program’s ability to read data from it.

Naturally I thought that it was a syntax error. Maybe I forgot a character of syntactic punctuation somewhere, or forgot to enclose an XML data set properly within its tags. No. After much double checking for errors, followed by more back and forth (changing one thing at a time within the file, then rebooting the RimWorld game to test if it now recognised it), it turned out that adding comments to the data set broke the RimWorld binary’s ability to read the file. Just for clarity: when I say comments, I mean fully syntactically correct XML comments.

Example:

 <!-- this is my comment -->

I can only conclude that the XML interpreter within the RimWorld binary for whatever reason does not have the functionality to understand comments and skip them. At least when specifically talking about reading data from either files that it originally generated or from scenario files in general (.RSC file format).

After a little additional testing: apparently I may put comments in and have the file still load, but only if they are not within the <parts>…</parts> section, in between list items (<li>data</li>). In other words, as long as the comments aren’t in the only useful place to have them: meaningfully separating a run-on list of items into recognisable categories.

I’m guessing that the RimWorld interpreter probably has a very rigidly structured read protocol; and why shouldn’t it, since it is only expecting to read files that it itself created. Please note that I am not an expert on XML, nor on the interpreter RimWorld is using; I just can’t help speculating when I observe such behaviours.

Honestly, it doesn’t much matter anyway. I only mention my experience here in case you choose to add comments to your scenario file, and it then suddenly stops working despite having no syntax errors. I hope to save you some needless troubleshooting and head scratching.
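
If you would still like to keep comments around for your own reference, one possible workaround (a sketch I have not tested against RimWorld itself, and which only handles comments that start and end on the same line) is to keep a commented master copy and strip the comments out before handing the file to the game. The file names below are hypothetical:

# strip single-line XML comments from a working copy before giving it to RimWorld
sed 's/<!--.*-->//g' scenario_with_comments.rsc > scenario_clean.rsc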

Text-field character limits

The RimWorld Scenario editor has an upper limit to the number of characters each text-field box can contain (as of version 1.3.3117). I first realised this when my initial draft of the scenario description did not fit into the intended text-field box when pasted into RimWorld. What followed was a tedious process of trimming my sentences (narrative) until it finally fit.

To save you having to do the same, I decided to get the character limits of each text box. The process I used was to input the character ‘0’ into each text field until full, then CTRL+A, CTRL+C, and CTRL+V into an empty text file. I then used the Xed text editor’s built-in word and character count tool to get these results.
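
As an aside: if you prefer counting from the terminal instead of a text editor, the standard wc utility can do the same job; “field_dump.txt” below is just a hypothetical name for the file holding the pasted characters:

# -m counts characters rather than words
wc -m field_dump.txt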

Text-field character counts:

Title: 55
Summary: 300
Description: 1,000
Game start dialogue: 20,000+*

*doesn’t seem to have a fixed upper limit (possible dynamic text-field)

Character count files:

Equipment parity issues

I spent quite a bit of time going back and forth between the scenario editor and the equipment section of the Prepare Carefully mod, as I wanted to make both lists uniform. Even though, technically, only the Prepare Carefully equipment list actually matters from a gameplay perspective (since that is the one that overrules the scenario’s list and actually makes its way into the game), I still wanted both lists to be the same, since the scenario list is the first one the player sees, and consequently informs them of what to expect.

I should mention that the reason why the two lists of equipment weren’t always identical is because, as I edited the pawns and looked through the (quite frankly better) Prepare Carefully mod’s equipment chooser, I got motivated to give and take equipment. For example, I added the autocannon to the list rather late, as only once I saw its graphic in the mod’s loadout section did I get inspired to use it as a surrogate for the Engineer’s big turret.

In order to avoid this tedious back and forth editing, I would recommend that you plan ahead and write down the complete equipment list before initially creating a custom scenario. Alternatively, don’t worry about the scenario editor’s equipment list at all while making the custom pawn presets; instead, circle back to it at the end and essentially paste in the equipment list from Prepare Carefully.

Instructions for running this scenario (GNU Linux)

1) firstly make sure you have the mod “EdB Prepare Carefully” installed
2) download the file archive here: “rimworld_tf2_scenario.ZIP”
3) unzip the file archive
4) move/copy the file “TF Industries Australium survey force.rsc” to “~/.config/unity3d/Ludeon Studios/RimWorld by Ludeon Studios/Scenarios”
5) move/copy the file “TF2_crew.pcp” to “~/.config/unity3d/Ludeon Studios/RimWorld by Ludeon Studios/PrepareCarefully”
6) boot the game, it should show up as a custom scenario
7) choose it, then click on the “prepare carefully” menu button
8) click the “Load Preset” button and choose “TF2_crew”
9) edit the pawns to taste
10) start game

Note: Windows and Mac instructions are basically the same but with some variation around the location of the RimWorld data directory.
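
For reference, on Linux the unpacking and copying in steps 3 to 5 boil down to something like the following terminal commands. This sketch assumes the downloaded archive is in the current directory and that the target folders already exist (the game and the mod normally create them):

unzip rimworld_tf2_scenario.ZIP
cp "TF Industries Australium survey force.rsc" "$HOME/.config/unity3d/Ludeon Studios/RimWorld by Ludeon Studios/Scenarios/"
cp TF2_crew.pcp "$HOME/.config/unity3d/Ludeon Studios/RimWorld by Ludeon Studios/PrepareCarefully/"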

Closing thoughts

Simple one this time round. Like I said I wanted to complete something small in a timely manner, just to dip my toe into the waters of modding RimWorld. I hope you enjoy playing with this mod as much as I did. Hmm, is this thing even technically a mod? Does it matter? I guess not.

Funny. It actually took me longer to do this write up than it did to create the actual thing that it is about. I guess that’s not that odd, considering the fact that explaining things properly can take time. Especially with my signature caveats and addendums … and waffling.

I hope this article motivates you to at least try out getting into modding games, if you aren’t doing it already. Never be afraid to start something new. But start small. Start with something that can be completed within a timely fashion and with your current skillset. Then iteratively add complexity with each later thing that you complete.

I know this is rich coming from me, but remember that “perfect is the enemy of done”; and that there’s nothing more motivational than finishing something that you set out to do. It’s better to complete 100% of something small, than 66.6% of something big. The former can be released out there for people to enjoy, whilst the latter rots in your “ongoing_projects” folder. Or maybe that’s just me.

Thanks for reading.

Mod Files

(Please check the downloads page for the most up to date version, in case I forget to update the link here.)

Term Glossary

UI – User Interface
XML – eXtensible Markup Language

Links, references, and further reading

https://rimworldgame.com/
https://steamcommunity.com/sharedfiles/filedetails/?id=735106432
https://steamcommunity.com/sharedfiles/filedetails/?id=927155256
https://rimworldbase.com/prepare-carefully-mod/
https://rimworldbase.com/simple-sidearms-mod/

#0030: Game review: Princess Remedy In a World of Hurt

Preamble

I wanted to feature this game here as I think it is rather interesting, and I have a few comments I’d like to make about it. The game is called “Princess Remedy In a World of Hurt”. It is a bullet hell game created for the PC platform. However, interestingly, it was made with the design limitations of a game targeted at the Nintendo Game Boy Color portable games console; at least it appears that way superficially. This article will feature a review and discussion of the game, as well as a play-through or two to demonstrate the gameplay, visuals, sounds, and general game mechanics on offer.

Screenshots

Pixel art samples

  • Sprite size: 16×16 pixels
  • Grid cell size: 16×16 pixels
  • Grid size: 10×8 cells
  • Status bar size: 160×12 pixels

Game Review

Before I begin it is important to get a bit of context on the circumstances of this game’s creation. According to the read-me file that I found within the Steam version of this game’s directory (‘remedy.txt’): ‘Princess Remedy In a World of Hurt’ was originally created in 2014 during a livestreamed four day charity game jam by a group of four people. This initial completion constituted their version 1.0.

Although this version lacked certain additional features present in the latest 1.5 version that I have played (features such as gamepad support, an options menu, multiple difficulties, and additional endings), version 1.0, although undoubtedly rough, established a set scope of gameplay mechanics, narrative, and player experience that was then subsequently refined and improved upon.

With the exception of the ‘Jealous Chest’ mechanic and extra endings, the additional refinements were mostly quality of life features, and in my opinion do not necessarily constitute raw additional game content; not in the way a new area or new enemies would, for example. As such, the final version still feels like a game that could arguably be created within a short amount of time.

I only mention the game’s humble origins because it is apparent by the restricted scope of mechanics present, story, and short playtime: that not much in terms of resources actually went into the game’s creation. By resources I mean time taken to either plan a deeper narrative, create additional gameplay mechanics, or create more materials (i.e. media like image sprites and sound effects). Add to that the necessary programming time taken to implement and test every additional element.

Although it may come across as a criticism, I do not mean it as such. Rather, it is by virtue of its spartan nature that I am attracted to this game to begin with. I wish to emulate it in my own way, and create a similar title as a practice game for a larger project I have in mind. I also rather like the minimalist approach to game design presented here: a design that discards all but the essential components needed for a viable gameplay loop. As a hobbyist game designer who has discarded games mid-development due to feature creep (and the frustration that it incurs), I actually admire an approach that respects the limitations of available resources and deadlines, and operates with a more prosaic ‘get it done’ attitude as a consequence.

Now onto the game itself. The core gameplay experience consists of walking around and exploring a classic 2D RPG world like that of ‘Final Fantasy I’ (FFI). Here you find people to talk to, and then enter a battle instance with. This battle instance consists of a unique bullet hell mini-game, as every NPC has their own custom setup of enemies and terrain layout to contend with. Winning these battles provides rewards in the form of a stat boost called ‘Hearts’. Hearts are the most important stat booster in the game. This is because, in addition to marginally boosting the character’s health (or hit) points, a set number of them are also needed to open the specific gates that lead to other map areas, and thus progress the game.

The game has a simple and concise gameplay loop. It may superficially look like a classic RPG title such as FFI; however, all the extraneous RPG mechanics from a game like FFI are absent here. There are no items (beyond gate-keys), no status effects, no character abilities, and no levelling. There is however a basic system of stat progression that involves collecting stat tokens.

The full list of stat tokens include: the aforementioned ‘Hearts’ which marginally improve HP, ‘Power’ which increases shot damage, ‘Multi’ which increases the number of shots fired at a time, ‘Regen’ which increases the HP regeneration rate, and finally ‘Flasks’, which increases the number of uses of the special attack action during combat. In addition to Hearts, all of the other stat boosts are exclusively found in chests dotted around the various towns and caves.

The actual game world itself consists of a simple world map, which links together a series of higher fidelity maps. These higher fidelity maps primarily come in two forms, towns and caves, but also include a few castles, a pond, and several other unique areas. The world map only contains heart-gates and key-gates. It is the higher fidelity maps that contain all other interact-able objects. These come in two formats: NPCs and chests. Each NPC only offers a quick dialogue on interaction. This dialogue either contains game hints, or instigates that specific NPC’s bullet hell mini-game. The chests, meanwhile, contain either stat upgrades or keys for opening shortcuts.

I should mention that the higher fidelity maps also contain a puzzle element. Some chests are set up in a way that resembles secrets from other visually similar RPG games. Specifically, in order to get to them, the player has to walk off of the displayed tile area, by passing through normally impassable terrain tiles (like walls), into the black space in and around the map that traditionally denotes impassable terrain. Links to secret paths like this are marked by a slight imperfection on the terrain tile that connects to them, thus marking it as passable terrain.

That’s it. That’s the game. Talk to people, then win the bullet hell battles they offer to get hearts; find chests, get stats, and more hearts; then open the heart-gate to get to the next area. Rinse and repeat until you get to the final boss. Where you play an extended bullet hell battle. Done.

The only real deviation from this formula that this game offers is via the ‘Jealous chest’ mechanic. Within an advanced area of the game (one that is gated by three separate heart-gates), hidden within the town map there, exists the Jealous chest. This is a special chest that will give the player a shot power boost, but only if the player has not opened any other chests before it. Meaning that in order to acquire that shot boost, the player will have to win (nearly) every battle up to that point in the game without any of the stat boosts that the other chests offer.

This challenge adds significant difficulty to the game, and I personally found it rather engaging. However, there is a downside to this. The problem comes in when you actually get the Jealous chest. Shortly after opening the chest and getting the extra power boost contained within, the player gets the contents of all the normal chests in previous areas; even the ones hidden by secret passageways that may otherwise be missed.

This gives the player a very sudden and dramatic power boost; which on one hand feels great, due to the fact that up until this point the player has been surviving the battles with mere base stats. ‘Surviving’ being the operative word here for the experience. Then all of a sudden you gain all the stat boosters from three zones, giving you the power to nuke previously troublesome enemies like the Ghosts.

The problem with this sudden dramatic power gain is that it causes an inversely dramatic drop-off in game difficulty. Even though, technically, the enemies fought after this point are stronger than the previous enemies, the same level of planning and skill required to survive and win battles up to this point is no longer necessary, due to the raw power output the player now has.

This phenomenon causes the player to experience a significant spike in difficulty in the mid game levels just before acquiring the Jealous chest; a spike which is then not surpassed by any of the following levels, including the final boss fight. This is due to the smaller disparity of power between the enemies and the player. In other words, once you get the Jealous chest you can essentially coast through the rest of the game; even though you’ll technically be fighting stronger enemies, it will not feel like it.

Luckily the Jealous chest is an add-on mechanic, and is only really necessary if you wish to get the full 101% completion rate. If you don’t care about that, then you’ll likely experience a far more gradual and balanced difficulty curve as you progress through the game the normal way: haphazardly collecting (and missing) chests as you go.

Moving on. As for the bullet hell battles themselves, they are also very simple. They consist of manoeuvring an auto-firing character around, and occasionally using the catch-all action button to throw a flask, which functions as a grenade, doing AOE damage across a three-by-three (nine square) grid. The standard shots fire automatically from the character, as is standard fare in bullet hell games.

What isn’t standard is the fact that the character can change which direction she is facing, meaning that in Princess Remedy you can fire in all four directions. This is because the game takes place in a sandboxed square arena, unlike more traditional arcade bullet-hell shooters, which tend to play out within either a vertically or horizontally scrolling stage. In those games the player character’s firing direction is fixed to face the direction that the stage scrolls into frame from, as that’s where the enemies are coming from. The most typical example of this is a spaceship-themed vertical arcade-style scroller like ‘Ikaruga’.

Ikaruga Steam trailer

The enemies in this game are rather varied. There is a mixed bag of enemies with differing movement patterns and health points, which emit different shot types in different numbers and frequencies from each other. There are enemies such as the Spike-ball, which simply moves towards the player when the player is within its line-of-sight, as well as the Ghost, which follows the player whilst also intermittently disappearing and shooting a terrain-piercing shot toward the player. There are a handful of iterative enemies like the Ghost, i.e. harder versions of previous enemies. They utilise the behaviour patterns of the previous lower tier enemy, but with a little extra mechanic added on.

I should also note that enemy behaviour actually changes across the difficulty levels. To me it is always a pleasant surprise when the actual enemy AI is tweaked to be more difficult on harder levels. In my experience of playing video-games in general, it is far more common to see developers simply tweak stats like shot damage and hit points, then call it a day, all whilst maintaining the exact same enemy behaviour patterns. This game probably does buff the stats of enemies in the harder modes too, although I haven’t played enough to verify that to the point where I could confidently state so here. It’s not important either way. What is important is the fact that the enemy AI is tweaked and geared for the difficulty.

An example of this would be the Bat enemies. Bats are enemies that move in a random direction at a set time interval, and they damage the player by colliding with them. In normal mode, when a Bat dies, it simply dies. In hard mode, Bats shoot out three regular bullets in the direction of the player upon death; and in master mode the Bats move considerably faster whilst also doing everything from the lower difficulties.
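
For illustration only, here is a minimal C++ sketch of what difficulty-dependent enemy behaviour of this kind might look like. The names and numbers are mine, based purely on the behaviour described above, and are not taken from the game’s actual code:

enum class Difficulty { Normal, Hard, Master };

struct Bat {
    float speed = 1.0f;

    // Master mode: the Bat moves considerably faster.
    void applyDifficulty(Difficulty d) {
        if (d == Difficulty::Master) speed = 2.0f;
    }

    // Hard mode and above: the Bat fires three bullets at the player on death.
    void onDeath(Difficulty d) {
        if (d != Difficulty::Normal) fireBulletsAtPlayer(3);
    }

    void fireBulletsAtPlayer(int count) { /* spawn 'count' bullets aimed at the player */ }
};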

Moving on, now let’s discuss the more technical specifications of this title. This game was made using the Game Maker engine, and targeted the PC platform. More interestingly however, the game was visually designed to imitate a Game Boy Color game. It has the same resolution as GBC games (160×140 pixels), as well as similar colour palettes and sprite types (8-bit era 16×16 pixel sprites).

It also uses a severely limited range of player inputs for interacting with the game, although it maps multiple buttons/keys to each input type. For example, the ‘action’ input is mapped to multiple keys including Enter and Spacebar. This is where my first real criticism of the game comes in: the ‘action’ key. The in-game results of pressing this key are highly contextual.

If pressed next to an NPC, it will engage them in dialogue; if pressed whilst moving, the player character will start running in the same direction; and if pressed whilst stationary and not facing an adjacent NPC, it’ll pop up the menu screen. Needless to say, it has caused me to misclick a couple of times, mostly by throwing up a menu when I intended to run. But that could just as easily be an issue with me and my keyboard. Although I found this catch-all action key to be a rather clunky method of input, it is ultimately a relatively trivial matter.
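
To make that contextual behaviour a little more concrete, here is a small hypothetical C++ sketch of the kind of one-key dispatch being described. The names are mine and purely illustrative; this is my reading of the behaviour, not the game’s actual logic:

enum class ActionResult { TalkToNpc, StartRunning, OpenMenu };

// One key, three outcomes, resolved purely by context.
ActionResult resolveActionKey(bool facingAdjacentNpc, bool isMoving) {
    if (facingAdjacentNpc) return ActionResult::TalkToNpc;   // dialogue takes priority
    if (isMoving)          return ActionResult::StartRunning;
    return ActionResult::OpenMenu;                           // stationary fallback
}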

The main thing that I have encountered within this game that is actually worthy of criticism is its pixel scaling method. At resolutions higher than the base times one (‘x1’), or 160×140 pixel, screen size, it looks absolutely awful. The sharp clean pixels at the base size get blurry even at the times two (320×280) screen size.

Honestly, I am not sure about the technology being used here to resize the window and rescale the display assets within it. If I were to guess, I’d say that it is the functionality of one of Game Maker’s image manipulation libraries. Judging from what I can observe, I assume the image is actually being scaled using a form of on-the-fly interpolation, such as linear or cubic interpolation.

Essentially, these are algorithms designed to guess at what colour the pixels should be within the newly created empty regions between the separated pixels of an upscaled image. Unfortunately, they do not have the context that they are dealing with pixel art that requires clean lines, and consequently they blur edges in a bid to establish some kind of smooth colour gradient. At least that’s my guess as to what is going on here.
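
For comparison, the standard way to upscale pixel art without this blurring is nearest-neighbour (integer) scaling, where every source pixel is simply copied into a block of identical pixels. Below is a minimal, self-contained C++ sketch of the idea, assuming a raw 32-bit RGBA pixel buffer; it illustrates the general technique, not whatever Game Maker actually does internally:

#include <cstdint>
#include <vector>

// Nearest-neighbour upscale: each source pixel becomes a 'factor' x 'factor'
// block in the output, so hard pixel edges stay sharp instead of being
// blended into colour gradients.
std::vector<std::uint32_t> upscaleNearest(const std::vector<std::uint32_t>& src,
                                          int srcW, int srcH, int factor)
{
    const int dstW = srcW * factor;
    const int dstH = srcH * factor;
    std::vector<std::uint32_t> dst(static_cast<std::size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            dst[static_cast<std::size_t>(y) * dstW + x] =
                src[static_cast<std::size_t>(y / factor) * srcW + (x / factor)];
    return dst;
}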

I think the window and asset scaling in this game was something of an afterthought, honestly. Especially since the ‘options’ menu containing it was only introduced to the game in its 1.1 patch, after the conclusion of the game jam. Meaning that the 1.0 game jam version was only made with the fixed Game Boy Color screen size in mind, and with all assets specifically scaled for it.

Only afterwards did the programmer maintaining the game decide to add additional resolutions. Unfortunately, they did not recreate the art assets to be more scalable; for example, by using large images designed for a modern full-screen display (e.g. 1920×1080) and then scaling them down.

Granted, this approach has its own issues, such as image artifacts being created by aggressively compressing image dimensions. However, in my opinion a little artifacting is considerably more palatable than the horrendous blurring incurred by the current solution of upscaling tiny images to large resolutions. This is most comically apparent in full-screen mode. Imagine what happens when you upscale (essentially stretch) a screen with a height of 140 pixels to 1080 pixels. Needless to say, it is virtually unplayable.

Now upon hearing this, you may think to yourself: why don’t they do that? I mean, it wouldn’t take long to use image manipulation software (like GIMP) to upscale each image asset by raw pixel doubling, without introducing blurring via interpolation techniques. Well, another reason why the current maintainer may not want to use upscaled images is the related image code itself.

Considering that this game was initially made in a span of four days for a competition, things like proper planning and future-proofing of code go out the window. It is very likely that this game has been hardcoded with strong references to the current image dimensions throughout the codebase. If so, it would also explain this slapdash, box-ticking approach to getting higher resolutions, as it likely avoids having to deal with the technical debt incurred by hardcoding the image dimensions into the game logic in this manner.

What do I mean by this? Let me illustrate the problem: imagine you had a line of code that moved the Bat enemy, for example. This enemy moves every other time tick in a random direction by the full length of its size (16 pixels). Assuming that the code for this is hardcoded (i.e. code containing asset dimensions, or verbose inflexible instruction sets), then the code for moving the Bat across the vertical axis may look something like: ‘moveBatY(){bat.posY+=16; updateSpritePos();}’.

Now, let’s say you want this game to also work at a times two scale (or at a 320×280 resolution). This instruction set will have to be modified to allow the Bat to move its full length, which is now 32 pixels and not 16. If the code is left as is, then the Bat will no longer function as intended/expected. This is a form of technical debt, i.e. creating your codebase in a way that will require reformatting/rewriting before significant additions can be made to the feature set; in this case, adding additional game resolutions. And considering the volume of image interactions going on in this game, having to reformat the entire codebase will likely not be a trivial matter. After a basic cost-benefit analysis, the programmer here probably deemed it not worth pursuing at the time, instead opting for the sub-optimal (yet viable) solution that is currently implemented.
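
To show what avoiding that debt might look like, here is a small hypothetical C++ sketch in the same spirit as the ‘moveBatY’ pseudocode above. The idea is to keep all game logic in base (x1) coordinates and apply the scale factor only at draw time, so the movement code never needs to change when new resolutions are added. All names here are mine, invented for illustration, and are not taken from the actual game:

// Sprite size in base-resolution (x1) pixels; the single source of truth.
constexpr int TILE = 16;

struct Bat { int x = 0, y = 0; };   // position stored in base coordinates

// Game logic: always moves one full sprite length, regardless of resolution.
void moveBatY(Bat& bat) { bat.y += TILE; }

// Rendering: the scale factor (2 for 320x280, and so on) is applied only here.
void drawBat(const Bat& bat, int scale)
{
    const int screenX = bat.x * scale;
    const int screenY = bat.y * scale;
    // drawSprite(batSprite, screenX, screenY, scale);  // engine-specific call
}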

Resolution scaling screenshots

Manually scaled examples

I’d like to leave this review on a positive note. As I was doing a final check after writing this review, I found out that the main developer, namely Remar, has very recently updated this game. The latest version at the time of writing is 1.6; however this version (at least currently) is only available on Remar’s personal website. The update included a few things, most notably a dedicated run button, which addresses some of the criticisms I made above. So discount those if you play version 1.6.

The really cool thing about this situation is that you have a developer here who decides to update a now 6-7 year old game; one that they made available to the public for free. That’s a developer that cares about their craft, and cares about their legacy. At least I’d like to think so, I mean they sure as shit aren’t making any money off of it. And as a guy who is currently trying to port his old garbage flash games to HTML5: love for the craft and posterity is the only real reason I can see for a person to go out of their way and tweak/improve something like this.

If you want to play ‘Princess Remedy In a World of Hurt’, it is available on the Steam PC platform and also on the developer’s personal website with no Steam DRM. Download link: ‘https://remar.se/daniel/remedy.php’. It is a free game, so give it a try. If you really like it, you may even want to purchase its paid sequel: ‘Princess Remedy In a Heap of Trouble’.

Video Play-throughs

  • Difficulty: NORMAL
  • Completion: 101% (jealous chest run)
  • Time: 42 minutes 5 seconds
  • Game version: 1.5 (Steam)

  • Difficulty: HARD
  • Completion: 101% (jealous chest run)
  • Time: 52 minutes 41 seconds
  • Game version: 1.5 (Steam)

Game credits

Game copyright Remar Games and Ludosity 2014
Design, script, code, SFX edit: Daniel Remar
Design, graphics: Anton Nilsson
Music, SFX: Mattias Hakulinen
Final boss songs: Stefan Hurtig

Nintendo Game Boy Color reference images

Closing thoughts

This is the first actual game review that I have done on this site. I hope that it proved to be insightful and useful to you; whether you wish to simply play the game (and you really should, as it’s free on Steam), or whether you are simply interested in hearing about a game’s systems and the interplay of mechanics within it. I hope that I at least entertained you, if nothing else.

The main reason why I covered this particular game is because I intend to create a clone of this title. I like the very limited nature of it, and genuinely think that I could make a copy in order to sharpen my skill-set. I needed a small but genuine gaming project to try out my tools, and to practice my pixel art and music creation. So look out for that game when it is out. I’ll pop a link here for it when available.

It’s sort of funny when certain coincidences happen, and one gets that feeling of living in a small world. The name ‘Ludosity’ came up a couple of times while I was reading up on this game. I just thought ‘huh, that rings a bell’ and moved on. It was only once the review was almost finished, and I decided to actually visit the website links within the readme files, that I found out who it was. They were the people that made ‘Card City Nights’, a game that I really enjoyed years ago. I even 100%-ed the game, blue Steam ribbon and all. I remember it was also a game of limited scope, being just a series of card battles attached to something almost akin to a visual novel, with a simple collectable card game or deck builder core. It also had a simple but very compelling gameplay loop, and lovely art…

On that final nostalgic note, I’d like to say:

Thanks for reading.

Glossary of terms

2D – 2 Dimensional
AI – Artificial Intelligence. Although in this context it doesn’t refer to real AI, but rather patterns of situational behaviour that non-player controlled entities engage in or act out.
AOE – Area of Effect
NPC – Non-Playable Character
RPG – Role Playing Game
Sprite – Simple low resolution image of an entity. E.g. an NPC.

Links, references, further reading

https://store.steampowered.com/app/407900/Princess_Remedy_in_a_World_of_Hurt/
https://steamcommunity.com/sharedfiles/filedetails/?id=716757641
https://www.nintendo.co.uk/Corporate/Nintendo-History/Game-Boy-Color/Game-Boy-Color-627137.html
https://remar.se/daniel/misc/themegaupdate.txt
https://www.remar.se/daniel
https://www.ludosity.com

Game text files:

#0029: Dev-blog #001: plans and preparations (‘Remote PI’)

#0029: Dev-blog #001: plans and preparations (‘Remote PI’)

Preamble

I wanted to create something of a diary to collate my thoughts, experiences, and general progression towards completing a large multimedia project. One with a significantly wider time-scale and scope of disciplines than I am accustomed to. It will be a multi-session project that will require several months to assemble the knowledge and materials necessary to ultimately produce a playable video game demo. The working title for this project will be “Remote PI”, or “R-PI” for short. Not to be confused with “RPi” (Raspberry Pi).

This project will require creating a proof of concept video game demo as a solo developer. By ‘proof of concept’, I mean a vertical slice of gameplay that will feature all the core gameplay mechanics layered atop each other in a cogent manner to form a functional and hopefully compelling system. The demo will also require thematic art and music, as well as featuring a narrative hook designed to entice players to support the production of the full game.

Essentially I plan on documenting the creation of a playable teaser trailer here. In addition to writing about the development of the demo itself, I will also document my own relevant skill progression. Since this project will require me to become practiced at all the disciplines needed for realising an interactive multimedia product like a game, I will need to acquire passable skill in art and music, as well as brush up on my general programming fundamentals. During this process I will no doubt also have to learn a few new tools (IDEs, programming languages, etcetera); and that’s not even to mention the more mundane skill-sets like script writing, project organisation, and time management.

This series of post entries will journal my progress in this endeavour.

My current skill-set (or lack thereof)

Before we go on I think it’s best that I identify to you where I am currently with regards to my skill-set. This is in order to understand what I need to work on to realise this vision. At the time of writing this (2021), I have been a hobbyist programmer for a few years now (since 2013-14 I believe). Being a hobbyist I have pursued programming for pleasure primarily.

This however meant that I was largely unguided and unfocused with regards to what I learned and how. This resulted in the collection of a more shallow and eclectic experience base, rather than a more guided and structured one (i.e. comprehensive and useful), like the kind associated with formal education. More aptly, it also dictated how long I stuck with things after the initial fun dried up and the laborious hard work began. But that’s a more personal failing of mine. I can’t blame the books that I bought for me not reading them. I mean, most of them had pictures and everything.

Anyway, I started out making short text adventures in an IDE called Bloodshed Dev-C++ using the C++ language. On technical inspection, these games could be boiled down to basically a series of if/else statements, with their function calls nested within another series of if/else statements. If I got fancy, I might even throw in a switch statement here or there. The only one I remember completing and being proud of was called “A little after midnight”.

Its most notable feature was that it had a rudimentary inventory system that simply fed into the cascade of if/else statements that constituted the game. I vaguely remember codifying the handful of items in the game as individual boolean variables. Example use-case: “if (isGotKey == true) open_door(); else locked_door_dialogue();”, or something to that effect. Like I said, it was primitive.
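
For the curious, a complete toy example of that style might look something like the C++ snippet below. This is a rough illustration of the general approach described above, with made-up names; it is not the actual code of “A little after midnight”:

#include <iostream>

// Items codified as individual boolean flags.
bool isGotKey = false;

void open_door()            { std::cout << "The door creaks open.\n"; }
void locked_door_dialogue() { std::cout << "The door is locked tight.\n"; }

// The whole 'inventory system' feeding into an if/else cascade.
void tryDoor()
{
    if (isGotKey == true) open_door();
    else                  locked_door_dialogue();
}

int main()
{
    tryDoor();          // locked
    isGotKey = true;    // 'pick up' the key
    tryDoor();          // opens
    return 0;
}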

At the time I remember just wanting to create a game, any game, and have it work. I liked “choose your own adventure” books like “Island of the Lizard King” by Ian Livingstone, and the various MS-DOS text adventures that I played as a child, all of whose names unfortunately escape me right now; and I simply wanted to have a go.

After wetting my feet with text adventures, I transitioned to making flash games using the ECMAScript-derivative language ActionScript 3. This was in the FlashDevelop IDE. In this period of approximately 3 years I only managed to complete two games. Keyword: ‘complete’. A basic 20 level platformer with godawful hit-detection called “Runaway Units”, and a single room point-and-click adventure called “Last Life: The Blue Key”. In both games I was helped by my friend Kross Drayllih, who created the music for Runaway Units and did the majority of the art and all the music for Last Life. However, I did the artwork for that game’s user interface. This will be relevant when I discuss the pixel art of this demo later.

Around this time I started dabbling with modding games. I created a few JSON mods for a community driven game called Cataclysm: Dark Days Ahead. Notables include: a mod that gives early game access to mutagens via (IIRC) an XCOM-esque gene-therapy clinic building, and a more general extra buildings mod called “OA’s additional buildings”. The latter of the two mods even got folded into the main game. If you ever find a sex shop, internet cafe, walled park, cruise ship, or ranger’s lookout: well, that might be my work you’re exploring. In addition to CDDA I also created a few simple ship mods for a space logistics game called Endless Space.

After this I fell out of love with making games, and moved on to playing with mini-PCs and microcontrollers; namely, Raspberry Pis and (clone) Arduino boards. I wanted to make robots. With these I was introduced to Linux based operating systems, specifically Raspbian, which I accessed remotely over WIFI (via the secure shell protocol) using a program named PuTTY on my Windows 7 machine.

I remember using bash as a remotely accessible text user interface to run custom bash scripts and programs written in Python 2. These programs were for controlling the RPi’s GPIO (general purpose input output) pins, which in turn controlled peripheral components like sensors, lights, relays, and motors. These programs were very utilitarian and simple in nature, as they were merely concerned with converting button presses into signal outputs, or interpreting signal inputs from the sensory peripherals and converting them into textual output for the console. The RPi’s GPIO library was the one that really did the heavy lifting there. Similar story with the Arduino IDE; my instruction sets were pretty basic there too.

As you have probably put together, I am just a hobbyist programmer. I’ve read a few books, done a few online courses, and played around with a few tools; everything from complete development environments like the Unity engine, to simple graphics libraries like SFML, SDL, and Allegro. But never to a serious extent. I also have more general knowledge about programming. For example: programming paradigms like functional programming, versus procedural programming, versus object oriented programming. But all of my projects have either been small enough that I didn’t really need to research proper code organisation for creating a maintainable codebase, or so large that they were abandoned for other reasons before necessitating an ordered, big-picture approach to coding.

As I am now, I am rusty even at the basic programming level I once had. This is due to my interests moving on once again. I’m not sure when exactly it happened, but as I started working and acquiring money, my interests gradually drifted towards the hardware side. These days I am more versed in repairing devices like power-supplies and old games consoles than I am in programming with a graphics library like SDL2 (Simple DirectMedia Layer), in order to do something simple like create a window and render an image onto it.

You might have also sussed out, from its absence of mention in the above summary, that I am not an artist. Although I have dabbled with some watercolours during my poetry-writing teenage phase. I even had a DeviantArt account (“night-eater”) that I used to host the handful of halfway decent things that I did manage to create at the time. But it didn’t stick. As for music: as I am writing this I don’t even understand basic music theory. For example what the notes mean, like A# or B flat. No idea. It’s embarrassing to write this down, but it is also the truth.

And that’s my starting point here. A rusty-ass second-rate programmer, who can barely art, and can’t music. Still, I think that this project is within my projected ability to complete. It’s like that chess adage: “the only way to get better is by playing a better opponent”. I think the same is true of creating and refining a skill-set. It needs a specific objective or challenge to be measured against. Otherwise it’s easy to just end up drifting around aimlessly, without ever feeling the need to work hard and develop the skills that have been acquired; at least not to a point where they can bear tangible fruit.

I know I said otherwise in the preamble above, however I do think that the scope of this project is actually relatively small (compared to what it could be). It’s just a short demo, and its music and art only need to be functional; they don’t need to be masterpieces. Still, let’s not risk falling for the Dunning-Kruger effect and underestimating the energy and time needed to develop the relevant skills. That’s a lesson I learned the hard way when making LL:TBK, where everything took three times as long to get done as initially predicted. All in all though, I do think this is doable for me.

Game Outline

Without any further ado, let’s outline the game that this demo will come from. Please note I will go into much further detail on the game’s mechanical substance in its dedicated article. This is just the declaration of the initial parameters that I wish to work within.

Game Specifications:

Working title: “Remote PI”
Genre (mechanics): (primary) point-and-click adventure, (secondary) hacking simulator
Additional mechanics: puzzle mini-games, rogue-like (permadeath and multi-run unlocks), survival (resource management)
Genre (narrative): detective thriller
Art style: medium-low resolution pixel art
Music: low key techno synths (i.e. “electronic music”), or something closer to chip-tunes
Playtime (demo version): approximately 15-30 minutes
Playtime (full version): around 1 hour per complete run
Technology (for game): C++ and SDL2 using Code::Blocks IDE
Technology (for demo): HTML, CSS, JS, and a simple text editor like Xed
Target platform (game): personal computer (Windows, Linux, Mac)
Target platform (demo): web browsers (specifically my website)

Synopsis:

Remote PI is a game in which you play as a private investigator working in a particularly seedy part of (an alternate history) London in 1999. It will have a gig-type gameplay loop, where the player is offered several cases via email. These discrete cases are then shown to contain an overarching narrative; one that escalates the stakes by tying the jobs to a central story about national security.

Thoughts on game ideas

You may be wondering why I am just listing out my game’s premise here, for something that I’d one day very much like to turn into a real commercial product and sell. Why am I revealing how this sausage is going to be made, secret family recipe and all? Well, it’s because I don’t value ideas (and thus “idea guys”) very highly. They’re not something to be jealously guarded like a dragon on a pile of gold (at least not at this level). I mean, I’d be willing to bet that you reading this could probably come up with a better idea for a video game. For example: Pacman but sexy. Boom! Million dollar idea right there. You just thought up Ms Pacman. Genius.

The idea of a detective noir, point-and-click/hacking-sim thriller that is set in late 90’s London is not unique or special. It’s just a disparate amalgamation of various things I like (and am familiar with) that I think will work well together. It’s the execution of the plan that dictates whether or not this product will be worth anyone’s time. Not the premise and promise provided by a “good idea”.

Thoughts on developer teams

You may have been wondering why I don’t team up with other people for these types of larger projects. I mean, many hands make light work, after all. Well, in my opinion it’s more often than not the opposite case of too many chefs spoiling the broth.

This is especially true in cases where a group of friends decide to work together, typically with vague workload expectations, job roles, schedules, and naturally differing creative visions for the final product. In such situations, I expect people to essentially waste their time and energy before the inevitable fruitless dissolution of the project; likely with some hurt feelings incurred in the process.

To give you a more personal and direct answer: I simply don’t like working with others on creative endeavours like video games. Especially in smaller, more intimate teams, where each individual has significant influence over whether or not the project is ever actually completed. The idea of expending myself, pouring serious time and energy into a project, only for it to still fail to finish because someone else either didn’t pull their weight or spitefully sabotaged it over some interpersonal gripe, genuinely frightens me. This is because it’d leave me in a situation where I can’t get that time back, and I’d also have nothing to show for it.

In the case of shared revenue projects (i.e. a project where money only comes in after it is completed, if at all): people tend to start behaving oddly as the project progresses and the initial new project enthusiasm drops off. I find it is at this time that people begin to gripe about things like: creative control, division of labour, maintaining personal morale, as well as time keeping, and deadlines.

Oh, and let’s not forget people’s general trouble with listening to basic instructions and specifications. Quite frankly, when I have worked with “creative types”, I find that I can quickly become frustrated with the development process, and with the risk factors associated with co-ordinating various people who say that they are invested in the completion of the project, but whose actions say otherwise. I find co-ordinating with these types of people to be a generally draining experience, closer to an exercise in herding cats than co-ordinating with professionals.

This feeling is exacerbated when these people then choose to organise themselves into flat, leaderless structures. I believe this is due to the incorrect assumption that such a small group will not need a leader to organise and co-ordinate the membership, especially when they can all join the same chat group and co-ordinate that way; like a group of friends might do, for example. This type of headless group structure is generally bad, unless everyone in the group genuinely holds themselves responsible for making their contributions on time and on par, even in the absence of the downward pressure and authority that a clear leader would offer.

For example: I have had experiences with creative partners who would only work on things as they felt like it. On one hand I understand creative desire, and that sometimes it can dry up. However, when working to deadlines it really can become frustrating, especially when they hold up the supply lines by making others wait on them for materials. And since no-one is currently paying them for their contributions, they don’t take their responsibility to the group seriously, or prioritise it appropriately within the routine of their lives.

I once had a partner who decided to create a superfluous website and start writing a script for a sequel to the title we were working on at the time. This was at a point when the actual current title was about halfway done, and I was waiting on them for materials. That got on my nerves, and the worst part is that I had to find the nicest way to ask the person to go back to making what we actually needed, rather than what they felt like doing. I was also acutely aware of the real chance that my criticism might discourage the person into quitting altogether, if I happened to upset them; in which case the project would be dead in the water. That tiptoeing around people’s feelings is honestly exhausting.

I could gas on about this, but it ultimately comes down to me not wanting to work with others on this type of thing. Things just have a habit of getting complicated when working on creative (and potentially commercial) endeavours with others. Who owns what, and how much, is also a headache of a conversation (argument) to have; especially when the group can end up fighting over scraps or (fantastical) projected earnings.

That’s as far as my experience with shared revenue type deals goes. I have looked at alternatives such as hiring artists and musicians. However, I don’t really want to work with freelancers I meet online. I might for discrete commissions of work, such as character profile pictures or wallpapers for a game. However, I don’t really want to get into a more extended business relationship with freelancers if I can avoid it, as I fear being nickel-and-dimed for every alteration or modification I may request.

I can’t say I’d blame them for having a mercenary attitude in this case either. They make their money during the development process, and strictly in exchange for their work. It’s not like I’d be willing to share revenue or copyrights of the finished game, after all. So the only way they make money is by charging for everything that they do, including time working on alterations.

Additionally, I am not in a position in my life right now where I can invest the time necessary to source and vet good contractors; research contract and copyright law to the point of competency, as I want to legally own the commissioned work; and finally (and most importantly) cough up the dough to pay for all this. So it’s all academic at the end of the day.

Closing thoughts

As I write this I ask myself: what is the purpose of this blog? Given a moment to think, I believe its main purpose is as a public declaration of intent. (Not that there’s anyone actually reading this, mind you.) I have had many similar projects in the past; ones that I have quietly started, worked on for a couple of months, then just as quietly abandoned. I always justified the abandonment with one reason or another. I am sure that at the time they were good reasons and not just excuses. However, I cannot deny a certain emergent pattern of behaviour of mine: I have started many projects, but have only ever really completed a small handful. And all the effort that went into those unfinished works is largely lost, not having paid me back any sort of tangible dividends.

The only exception is probably some skill building experience acquired whilst creating and prototyping new systems and mechanics. I have many unfinished games that have their core mechanics articulated to satisfaction, but not fleshed out with actual game content. Like a functioning inventory system filled with placeholder articles, with all of its item population and depopulation event calls coming from one central controller function, so that I could test and trigger the events at will. (Needless to say, this event controller function is to be removed once the system is placed into an actual game and has all of its event calls tied to in-game trigger events.)

Additionally, I have created systems like dialogue trees, local saving and loading, and 2D weather effects. Experience modelling these mechanics does, in my opinion, carry over. Even when tools like engines and languages change, understanding the fundamental principles of how something like saving a game state to file works, and actually having an instance of implementing it, is valuable in my opinion. Although nothing beats the grim satisfaction of actually finishing a fucking project.
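
As a concrete (if trivial) illustration of that last point, here is a minimal C++ sketch of saving and loading a game state to a plain text file. The struct fields and file layout are made up purely for this example; none of it is taken from my actual projects:

#include <fstream>
#include <string>

// A tiny stand-in for whatever the real game state would contain.
struct GameState {
    int level = 1;
    int health = 100;
    std::string playerName = "PI";
};

// Write the state out as simple line-separated values.
bool saveState(const GameState& s, const std::string& path)
{
    std::ofstream out(path);
    if (!out) return false;
    out << s.level << '\n' << s.health << '\n' << s.playerName << '\n';
    return static_cast<bool>(out);
}

// Read it back in the same order it was written.
bool loadState(GameState& s, const std::string& path)
{
    std::ifstream in(path);
    if (!in) return false;
    in >> s.level >> s.health;
    in.ignore();                         // consume the newline before the name
    std::getline(in, s.playerName);
    return static_cast<bool>(in);
}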

Anyhow, here’s to this thing. *downs an alcoholic beverage* burp… cheers.

Thanks for letting me ramble.

#0027: Dominions 4: Thrones of Ascension map creation guide

#0027: Dominions 4: Thrones of Ascension map creation guide

Preamble

This is a quick bullet point style guide to how I create maps for IllWinter’s game: “Dominions 4: Thrones of Ascension”. Note: I will henceforth refer to the Dominions 4 game binary as “dom4”.

Map creation steps

Step 1: acquire or create a base image to use as the map.

I created an image using the GIMP image editor, and saved the multilayer project file (.xcf).

Step 2: use an image editor program to alter/create an image to conform to a standard that the dom4 executable can recognise.

Image specifics:

  • image must be in the Truevision TGA (.tga) or Silicon Graphics Image (.rgb) image format
  • minimum image size: 256×256 pixels
  • recommended image size: 1600×1200 pixels
  • image must be in 24 bit or 32 bit colour depth
  • image must be either uncompressed or using run-length encoded (RLE) compression
  • image must use a single white pixel as a province marker
  • white = HEX:#ffffff, RGB:(255,255,255)

Step 3: using the image editor, export a correctly formatted image.

I exported a TGA image file to specification, with a size of 256×256.

Example image file name: “tutorial_map_256x256.tga”.

Step 4: copy that image file to the dom4 working directory.

Working folder directory:

  • Linux: ~/.dominions4/
  • Mac: ~/.dominions4/
  • Windows: %APPDATA%\dominions4\

Step 5: boot up the dom4 game, and navigate to the game’s internal map editor.

Menu navigation: Game Tools > Map Editor > New map > (enter image file name)

This will automatically create a map, by populating it with provinces (one at each white pixel), and then guessing at the links between the different provinces.

Step 6: use the in game map editor to edit the map data.

This includes: mapping links between provinces; formatting link types (mountain, river, ground) and terrain types (forest, farm, sea); placing throne sites, special sites, and so forth. It is important at this stage to generate random names for each province, in order to populate the .map file with the relevant code for the province names.

Example: #landname 1 “Deepmount”

Step 7: save the map. This will generate a .map file in your dom4 working folder.

Step 8: using a text editor: edit the .map file data such as the in-game map name, map description, and province names.
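
For illustration, the relevant lines in the .map file look something like the short excerpt below. The directives and values shown here are a hypothetical example based on my understanding of IllWinter’s map format (see the map manual linked below for the authoritative syntax), not a copy of the tutorial map’s actual file:

#dom2title Tutorial Map
#description "A tiny test map with a handful of provinces."
#landname 1 "Deepmount"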

Step 9: test play the map, and troubleshoot errors.

If the dom4 program crashes as soon as you load up the map, then it is indicative of a hard fault. For example, the tutorial map here crashes if you try to add more than 4 nations to it. Other crashes like this can be caused by the dom4 binary being unable to find a resource; e.g. the user renamed/edited resource files, then tried to load into a game that uses them.

Step 10: backup and archive your map

Example archive contents:

  • GIMP project file (.xcf)
  • original image file (.png)
  • processed image file (.tga)
  • map source code (.map)

Map files

Closing thoughts

This method is probably the simplest way to actually create decent playable maps. By using the built-in map creator tool, a user can bypass a lot of the tedium involved in coding, such as learning the meanings of the various symbols used for variables and game functions, or learning the specific syntax used to generate a viable .map file. This is because the in-game visual editor automatically populates the .map file with viable code. All the user has to do is use the visual editor to assign all the functionality that they require for each province by checking menu boxes, save the map, then circle back to the .map file afterwards to fill in a couple of titles (in-game text fields).

Ultimately, I think that this quick guide provides a good framework; one that an aspirant map creator can build on further by reading IllWinter’s map making manual, which in turn will allow them to gain further insight into the mechanics of map making, including more advanced functionality. In the meantime, just these simple steps will enable users to create perfectly playable maps; ones that should satisfy most.

Thank you for reading.

Links, references, further reading

http://www.illwinter.com/dom4/manual_mapedit.pdf <<< MAP MANUAL
http://www.illwinter.com/dom4/index.html
https://github.com/Narroe/Dominions_4_custom_maps
https://en.wikipedia.org/wiki/Truevision_TGA
https://en.wikipedia.org/wiki/Silicon_Graphics_Image
https://en.wikipedia.org/wiki/Run-length_encoding

#0018: Creating and utilising QR Codes

#0018: Creating and utilising QR Codes

image of a QR code

What is a QR code?

QR codes are a type of barcode. Barcodes are a visual representation of digital (or binary) information, designed to be easily understandable by machines. Barcodes enable machines to do useful operations involving the physical objects that the barcodes are placed on, such as sorting or counting large volumes of items accurately. For example, with packages at a mail depot, or product inventory at an automated warehouse or factory.

A QR code (or Quick Response code) is a type of 2 dimensional barcode, otherwise known as a matrix barcode. All this means is that it represents its binary information visually across two axes (x and y). It does this by plotting black (1) and white (0) squares on a grid. Matrix barcodes are an evolution of the iconic one dimensional barcodes, which represent their binary information in a single array of black lines and white spaces (i.e. columns), denoting their ones and zeros respectively.

It should be noted that there are numerous different types of barcode (both 1D and 2D) in use today, each one specialised for its specific application. These specialisations manifest as variations in visual design (e.g. markers for orienting scanners with different protocols); in barcode size, reflecting how much data they need to represent/encode; and in data encoding ability, i.e. what type of data the barcode image represents (typically numbers or ASCII symbols).

The most obvious difference between one dimensional (array) and two dimensional (matrix) barcodes is the addition of a Y dimension of information. This addition allows for a greater density of information to be stored; however, it also requires more sophisticated tooling to actually read the data from the barcodes themselves. The most notable hardware difference in this regard is that one dimensional barcodes use a simple laser line scanner, whereas matrix barcodes require a camera module. Because of this, you typically cannot read matrix barcodes with hardware designed for 1D barcodes; however, in many cases (providing that the software allows for it), you can use a matrix barcode scanner (e.g. a smartphone) to read information from 1D barcodes.

Additionally, I made the assumption above that on a barcode black denotes a one in binary and white a zero. In reality it really doesn’t matter, because the interpretation of the barcode is entirely up to the protocol standard that it is using. Take the case of a UPC-A type barcode, which uses a 7 bit array to denote each digit: different bit arrays can symbolise the same (base 10) number depending on their location on the barcode.

image of a UPC-A barcode
UPC-A barcode image taken from wikipedia.org
table taken from wikipedia.org
Example of a 1D barcode in use as store’s inventory identifier

Consumer uses for QR Codes

I’ll limit this discussion to consumer use cases and applications, because industrial applications are rather dry. Industry uses QR codes in the same way it uses other barcodes: to orientate machines that operate with and around physical goods. This could include use cases such as an Amazon sorting depot, where barcodes are used to inform the sorting machines of which goods are on which shelves.

Whereas within the consumer space, most QR codes in the wild are simply used as a means of storing website links and affiliate information. These are designed to allow people to simply scan the code out of a magazine, business card, coupon, or what-have-you, in order to very quickly load the website hyperlink and/or fill in a virtual document.

For example, a QR code at a public WIFI access point will have all the data necessary (link to login page, SSID, password) to allow the scanning smartphone to access the network. Likewise, a QR code on a coupon will link to the retailer’s online store page and automatically pass any promotional offers associated with that coupon to its e-shop.

QR codes, at least within the consumer space, are predominantly a means of convenience. They reduce the friction encountered when users operate within virtual spaces. Friction such as inputting long arbitrary names or numbers (such as a WIFI network’s SSID or password), or website domain names, where a typo could expose the user to a potentially malicious imitator website.

The friction is reduced because the process of scanning a code with a modern smartphone is far easier than inputting the same data manually using (most likely) a touch keyboard on that same smartphone. Another benefit is that the probability of user error (such as mistyping a password or domain name) is eliminated, by automating the process of data entry and bypassing the user in that workflow. That’s what I believe constitutes the vast majority of useful applications of QR codes in the consumer space; at least when talking strictly about static QR codes.

Additionally, the Wikipedia article for QR codes lists many different (specific) use cases for them in the consumer space; however, in my opinion they all boil down to the two things I mentioned earlier: following links and filling in virtual documents.

Anatomy of QR codes

The smallest unit of information on a QR code is referred to as a module. A module is a single square that is coloured either white or black. For example, a version 1 QR code is made up of a 21 by 21 module grid, totalling 441 individual modules. If you were to count all the chequered squares along either axis, it would add up to 21.

Look at the two examples below; both of these QR codes are identical Version 1 QR codes. They both have the same 441 distinct modules on a 21 x 21 grid. The only difference is the actual image size. This should illustrate that (within reason) the actual pixel (or print) size of the modules doesn’t matter with QR codes, as long as the scanning device’s camera can fit the entire code structure within frame and in focus.

Bash instructions:

qrencode -o qrc_V1_small.png -s 3 'ECC'
qrencode -o qrc_V1_large.png -s 6 'ECC'

A QR code’s modules are organised into several structures. These include: three position markers, several alignment markers (the number varies with version/size), a version information zone, a format information zone, a timing zone, an area for data and error correction keys, and finally a blank quiet zone to denote the border of the QR code.

image taken from wikipedia.org

QR code size specification

QR codes are rather versatile: they have the ability to encode ASCII symbols (letters and numbers), media (images, sound, and video), and even executable programs (compiled binaries). However, although they technically have this capability, it is severely hampered by the size limitations of the QR code standard.

At the time of writing, the largest viable QR code that can be created is the ‘Version 40’ variant. A Version 40 QR code can encode up to: 7089 numeric digits, 4296 alphanumeric characters, 2953 bytes of miscellaneous binary data (e.g. media), or 1817 Japanese kanji characters. This is when using the lowest level (level L) of error correction, thus leaving more space for actual data; so these are the absolute maximum values.

Refer to file “QRcode_version_table” below for a full list of all version specifications.

image taken from wikipedia.org

Test case of numerical barcode capacity

This was an interesting one. Although the Version 40 QR code specification states that I could create a numerical QR code with a capacity of 7089 digits, in actuality the maximum amount of data that I managed to fit into a QR code was 7080 bytes. I honestly don’t know what to make of that. The ECC (level L) is supposed to be factored into the 7089 maximum value, i.e. that is the maximum data storage in addition to the space that the error correction takes up; so it can’t be what’s limiting me from reaching the maximum. As to what is happening, I’m not sure. It could be anything, including a limitation of the program I used to create these codes (qrencode), some unknown setting, text file associated metadata (perhaps something as simple as a trailing newline added by the text editor), or some unnoticed human error in play. Hence I will include the input files I used here, so that you can try it out yourself and see where I messed up.

Bash instructions:

qrencode -r pi_decimals_7081.txt -o qrc_pi_7081.png
Failed to encode the input data: Input data too large

qrencode -r pi_decimals_7080.txt -o qrc_pi_7080.png
Successfully compiled QR code containing 7080 bytes of pure numbers
According to Xed, this file weighs 7080 bytes.

QR code Error Correction Capability

QR codes have a built-in Error Correction Capability (ECC). They use Reed-Solomon error correction codes in order to facilitate a certain level of data redundancy. This enables QR codes to remain readable even after they have sustained damage, such as by getting scratched or being partially obscured by grime. This error correction facility comes in four levels: L, M, Q, and H. The error correction of each level is expressed as the percentage of the total data that can be lost whilst maintaining the QR code’s readability.

This is as follows:

  • L has up to a 7% ECC
  • M has up to a 15% ECC
  • Q has up to a 25% ECC
  • H has up to a 30% ECC

Generally speaking, the error correction capability of a QR code isn’t free. The higher levels take up more of the QR code’s finite available space; space that could otherwise be used to encode more of the actual substantive data itself. This trade-off between usable storage space and data read reliability means that QR codes with higher ECC tend to be used in environments where code damage is more likely, or in applications where the printed QR code itself is going to be in active operation for a longer time period. Such as being attached to warehouse racking to identify the specific shelf location and product contents to an automated sorter machine.

image taken from archive.org – qrcode.com page

How to spot the ECC level on a QR code

To work out what ECC level is set on a QR code, look towards the “format information” zone to the right of the lower left position marker. Immediately after the single column of blank (white) modules bordering that position marker, the module at the very bottom of the QR code and the module just above it display what level of ECC is employed within the QR code.

Bash instructions:

qrencode -l L -o qrc_ECC_L.png 'ECC'
qrencode -l M -o qrc_ECC_M.png 'ECC'
qrencode -l Q -o qrc_ECC_Q.png 'ECC'
qrencode -l H -o qre_ECC_H.png 'ECC'

zbarimg *.png

Using ECC to incorporate logos into QR codes

When I looked into how people actually create the fancier QR codes, the ones that incorporate graphics such as text and logos, I was genuinely surprised at the crudity of the methodology. I thought that it might involve something akin to passing arguments to the QR code generator to re-route the data sectors around the graphic. Nope. It’s actually laughably simple. If you wish to incorporate graphics into your QR code, just crank the ECC up to the maximum, output the QR code, then slap that graphic on top of the QR code using some graphics manipulation software; in this case GIMP. Done. I mean, it does the job. I just don’t like the idea of purposefully damaging data integrity.

Look at the examples below. Both of these QR codes hold the same data: a link to this website’s homepage. The size disparity between the two is caused by the additional error correction code added to the level H QR code, which caused it to jump up a couple of versions. I’m guessing that since my logo covers more than 7% of each QR code, this is the reason why the level L QR code is no longer functional; whereas the level H QR code continues to function after the addition of the logo, because the logo covers less than its 30% maximum error correction capacity. This methodology seems rather crude, but it works.

Example of a real QR code that incorporates a logo

Encoding and decoding QR codes

There are numerous ways to generate your own QR codes. I’ll just mention a few to give you an idea of where to start.

Firstly, I should mention that there are paid services that allow customers to get custom QR codes. This includes things like “dynamic” QR codes, which can supposedly keep a tally of the number of times they have been scanned. However, if you aren’t using them for professional applications, I generally wouldn’t recommend using paid services like these. This is a tinkerer’s blog after all. That being said, I mention them here just to make you aware of their existence.

All the tools that I mention from this point are free to use and readily available. If you are using a Linux based operating system or virtual machine, or use bash in Windows, I recommend downloading and using two programs: “qrencode” and “zbar-tools”. Both are available in the Ubuntu main repository. This is what I primarily used to create and test all the codes in this article.

The zbar-tools toolkit comes with two relevant programs: zbarimg and zbarcam. zbarimg is used to scan local images of QR codes, and can output that data either to standard output (usually the shell) or piped into a file. zbarcam has similar functionality, except it can use the computer’s camera to capture a QR code.

Alternatively, for quick QR code generation you could use an online website. I don’t want to recommend any for liability reasons, as there are lots of random websites that can do this. What I found interesting though, is that the search engine duckduckgo.com can actually generate QR codes as well. I came across this rather by accident, as it returned a QR code with the search term data within it.

To use DDG to create a QR code, just type in “qrcode” followed by a space and then whatever textual data you want in it. I find this method rather novel, and probably good for simple quick and dirty codes. However, it lacks the precision of functionality that a program like qrencode gives you, such as the ability to specify the module size and ECC level of the generated QR code; qrencode does this via arguments passed to the program.

TLDR

Tools to create QR codes

  • Linux: qrencode
  • websites: duckduckgo.com, etcetera
  • paid services for business including dynamic QR codes

Tools to decode/scan QR codes

  • Android phone apps
  • (zbar-tools) zbarimg & zbarcam

Encoding and decoding binary QR codes

To encode a file (such as an image) into a QR code, you need to pass additional options to qrencode. Use the -8 argument to specify 8 bit mode, and the -r option to specify a file input.

qrencode -8 -r input_image.png -o output_qrcode.png

To decode a QR code image of a binary back into a file, you have to pass the below options to zbarimg, to let it know that it is dealing with a binary file. Additionally, you then need to redirect the output from zbarimg into a file; otherwise it’ll just dump the data into standard output (i.e. the shell itself).

zbarimg --raw --oneshot -Sbinary qr_code.png > output_file.png

One thing to note: versions of zbarimg (zbar-tools) older than zbar-0.23.1 do not have the capability to decode binary files. At the time of writing, the version of zbar within the Ubuntu repository was zbar-0.23; an older version without the capacity to decode binaries.

Alternatively, decoding a binary QR code without specifying the output format, or without piping the contents into a file (instead just letting it output into the shell), will cast the contents as ASCII text. Instead of a file, it will output a string of nonsense characters. However, the first couple of legible characters should consist of file metadata that can tell you what format the binary data is in. The example below is the same QR code as above, but output as ASCII rather than as a binary.

QR-Code:‰PNG

\00\00\00
IHDR\00\00\00Œ\00\00\00Œ\00\00\00!¢Öi\00\00IDATxÚíݱmƒ@†á8rÏ\00´i#s¤af`7™Ã‹¸e\006Hé”|`r<oé\ù^}÷c[>õ}ÿ‚}ójHBÎNJ¢Xã?5M“ÙÞµm»øwÿt•‘$u‡´uZîýíclõõö9»f–ªª2ÛÄ»v;¦÷$IÝa¥Óݑ©ëzvÍårIؐ1½'Iêê.§“áDïI’ºƒº;HïI’ºI$$DHI$$’@H"	$$’@H"	$oéúE’7ãK’ºCuÞe>—ϵÍmïùLª$©;ü—ÓÝfI‡$‘’HI ‰$’HIG%Ù+³^¼ÈžTw8%IÝauZ®ë:»yGY–IzO’ÔH"	$aûa6œd I$!ïº3Ì®w	$u’HI0ÌJH‚aÖ0uGHINw†YIIêÎ0k˜I$$’`˜…$‘ìaV’ÔH"	$Á0+I 	†YÃ,ÔH"	$‘’@I 	$‘„58ö™OßýÞ/[¿ÁeI’¤gh:Ã5cëcþš$Iâ®aO“4ñåb©¾cëh,ÛRIzô¶AÂ\“³ Éé.[¾¯.ñw¦×ÿüéJӒ$IÒîó”j½;’4Ž‰59˶T’Á‘‚Sß÷wEa_žÈЈ$©;¬Tw$”#_{G}¨õ—´\00\00\00\00IEND®B`‚

Closing thoughts

QR codes are basically ubiquitous today. They are present everywhere, and applicable to many use cases; ranging from storing simple web links, to gaming uses such as markers for augmented reality. I recently found a tiny one printed onto the underside of my kettle. Why is it there? Just because. The point is that they are everywhere.

Additionally, the technology is exceptionally accessible, with a surprisingly small learning curve. This, coupled with the fact that quality creation tools (like qrencode) are freely available, means that if you for whatever reason would like to use a QR code within your work, there is little reason not to do so.

If I am perfectly honest however, I personally don’t see much actual utility for me, other than inserting a QR code linking to my website or email within a business card. Perhaps even in printed articles, as a means of linking in additional online resources. It’ll certainly have a lower level of friction than printed hyperlinks have.

If I’m honest, the main real appeal of this technology to me is the novelty of storing binary information within a printable medium. The fact that you can encode an actual binary (so long as it is smaller than 2.9kB) into a paper medium is wondrous. Check the links below for a Youtube video of a person who encoded an entire executable game into a static QR code. Not a web link to the game. The game itself. Imagine storing actual games in a book. Tiny games, but still. Cool.

Anyway, happy QR coding. Thank you for reading.

Some fun

Links, references, and further reading

https://www.youtube.com/watch?v=ExwqNreocpg

Scan Barcode QR Code From Webcam & Image File in Linux

The purpose of QR Codes

QR Codes basics

https://en.wikipedia.org/wiki/QR_code#storage

https://en.wikipedia.org/wiki/Binary_number

http://qrcode.meetheed.com/question14.php?s=s

https://www.linux-magazine.com/Online/Features/Generating-QR-Codes-in-Linux

https://www.binaryhexconverter.com/ascii-text-to-binary-converter

https://en.wikipedia.org/wiki/EAN-8

https://en.wikipedia.org/wiki/Universal_Product_Code#Check_digit_calculation

https://en.wikipedia.org/wiki/Barcode#Symbologies

How To Convert Images Into ASCII Format In Linux

http://www.qrcode.com/

https://web.archive.org/web/20130127052927/http://www.qrcode.com/en/qrgene2.html

https://www.thonky.com/qr-code-tutorial/error-correction-table

https://scanova.io/blog/blog/2018/07/26/qr-code-error-correction/

https://www.qrcode-tiger.com/qr-code-error-correction

https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction

https://medium.com/@r00__/decoding-a-broken-qr-code-39fc3473a034

#0016: Software recommendation: Firefox Monitor and haveibeenpwned?

#0016: Software recommendation: Firefox Monitor and haveibeenpwned?

https://monitor.firefox.com/

https://haveibeenpwned.com/

Preamble

In a bid to make more immediately useful content, I’d like to start recommending some of the various tools that I use. In this case it is an online service. Namely Mozilla’s Firefox Monitor; or more to the point, it is actually the website haveibeenpwned.com (HIBP), which Firefox Monitor uses to enable its service.

What do they do?

In essence, Firefox Monitor and HIBP are used to check whether or not an email address is associated with a recorded data breach. Keyword: “recorded”. They do this by using a database of known breaches provided by haveibeenpwned.com.

The purpose of this service is to allow people to ascertain whether or not an online account (and the user information therein) associated with the email address has been compromised in a known data breach, and is thus in need of immediate remedy. Things like: changing passwords, changing recovery phrases, and generally being aware that any potentially sensitive information associated with that account, such as: full name, mother’s maiden name, GPS location, education, birth date, telephone, city, school, or business information, has now circulated within the hacker community.

Additionally, it helps to know which company is to blame for the spike in spam and phishing emails that will most certainly accompany said breach. I don’t know about yourself, but that’s something I’d certainly like to know.

Why is this service important?

It is my belief that every solution begins with awareness, the awareness of the problem. Only then can we move to better the situation. This tool gives you exactly that.

The main reason why I think this tool is important is that the companies involved in the data breaches themselves are loath to make their customers aware of them. Even though it is in their users’ best interests, it is not in the businesses’ best interests to advertise any breaches beyond the legally mandated/enforced minimum. Furthermore, who knows what that minimum actually even is when dealing with global or multinational companies that operate over many legal jurisdictions. This is especially true when dealing with larger companies with entire legal teams at their disposal.

This service is important because (still just my opinion): companies in general tend to quietly patch any security vulnerabilities as they find them, and move on hoping no-one has noticed. This is especially true when there is no internally confirmed security breach.

Whenever a confirmed breach does happen, the first thing that the company responsible does is downplay its scope and severity. This may (and probably does) include not even publicly reporting the breach until it has already been made public elsewhere, often at a much later time. In many cases there is even resistance to acknowledging fault after the breach is made public. This is most likely a bid to exonerate themselves of any potential legal liabilities involved.

At the very least, acknowledgement of fault could be seen as weakness. Weakness that will shake public confidence in the company and/or service. Therefore it is in their best interest to maintain the general illusion of control and/or competence. It’s corporate PR 101. It’s just a shame that the company’s and its users’ interests don’t align within this circumstance.

Why should people use these tools?

Both Mozilla Firefox Monitor and HIBP are free-to-use, publicly available tools. Both come from reasonably trusted sources. Firefox Monitor is the product of an open-source, community-driven effort, giving it a certain level of transparency. And HIBP was developed by Troy Hunt, an authority on the topic of digital security. Even if you don’t know who Mr Hunt is (and I didn’t prior to this post), the fact that the Mozilla team decided to use his HIBP database for Firefox Monitor means that they are vouching for it.

More importantly, the tools themselves can assist an individual with regards to protecting their personal information online. They do this by giving the individual that exact thing that I mentioned earlier: awareness. Awareness of whether or not that person’s email-associated account information has been circulated, and which company is at fault for it.

For example: if you used the tool and because of it now know that an account associated with your email with company X has been breached, and that along with that breach your “security questions” were revealed; then you now know to remove those particular security questions, and never to use them with any future account … ever. As they are basically permanently compromised. Forewarned is forearmed.

taken from https://github.com/mozilla/blurts-server

Difference between Firefox Monitor and haveibeenpwned?

Firefox Monitor is a very slimmed-down version of the HIBP tool that gives the lay user just what they need, without overwhelming or putting off said lay user. It is rather idiot proof; merely requiring users to input their emails and press enter. That’s it. Firefox Monitor also comes bundled with a few basic articles on good security practice, which may be helpful to the average user. Common sense stuff a lot of it, but you know what they say about common sense.

Although Firefox Monitor is the simpler tool to use, it must be said that HIBP is a far more robust tool. And the one that I recommend. This is because in addition to searching email addresses, it allows searching via passwords and domain names. The website also allows users to browse a catalogue of breached websites without running a search. Extracts below.

Ever wondered how many accounts have been breached because they used the password “love”? Wonder no more. According to HIBP, it’s 356,006 times.
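
As an aside for the more technically inclined: HIBP also exposes this password data through a free, keyless “range” API built around k-anonymity, so you can check a password without ever sending the password (or even its full hash) to anyone. Below is a minimal sketch of that lookup from a shell, using the same example password “love”; the variable names are just for illustration.

# Hash the password locally, send only the first five hex characters of its SHA-1 hash,
# then look for the remaining 35 characters in the list of suffixes that comes back.
hash=$(printf '%s' 'love' | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}
curl -s "https://api.pwnedpasswords.com/range/${prefix}" | grep "^${suffix}"

If the final grep prints a line, the number after the colon is how many times that password has appeared in known breaches.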

I have also picked out a nice little selection of companies from HIBP’s catalogue of known breaches that you may find interesting.

Personal experience with a data breach.

Just an aside if anyone is interested. From reading the above “Why is this service important?” section, you might have gotten the idea that I may be ever so slightly cynical about the companies involved in security breaches like these.

Frankly speaking, whenever data breaches do happen, I do not consider the corporations involved to be “victims” of cybercrime, as many others seem to do. It is a nauseating sentiment. One that condones bad behaviour. This is because it is my personal belief that the vast majority of these cases are due to one core thing: a dereliction of duty. Them failing in their duty to protect the data that they collected. Little more.

In addition to consuming the various news articles about data breaches over the years (ones with the general theme of corporate incompetence, like employees carrying around sensitive data on unencrypted thumb-drives only to lose them on the train), I also have a few examples of companies that leaked my very own personal information. All of this has coloured my opinions thus.

The most memorable is the online virtual tabletop gaming website roll20.net. The thing that rubbed me the wrong way about them is that at no point during the process did they ever take any accountability for allowing it to happen. They did eventually outline what information was taken, but they never offered an apology for their lapse in security. Instead they covered it up with boilerplate (legal-friendly) corporate speak.

Example: “The investigation identified several possible vectors of attack that have since been remedied. Best practices at Roll20 for communications and credential cycling have been updated, with several code library updates completed and more in development.” Assuming that is indeed true, the same could literally be said by any company involved in a similar data breach – just change the names.

Although from what I understand by reading the article that they linked in their post, technically (purely technically) it appears as though it’s not their fault, but rather was due to the underlying technology that they used. At least that is the implication presented. I’d argue that they still made the decision to use said tech, and thus vouched for it by doing so. Making them responsible, at least tangentially. At least enough for a simple sorry. The closest their customers got to an apology was a “Frankly, this sucks.” written in an official company blog post that they passed off as a conclusive public report; authored by Jeffrey Lamb, the Data Protection Officer.

I remember thinking at the time that whoever was writing this was good at the bland formalities of corporate speak, but otherwise is (and excuse my French): a fucking dickhead. You have to keep in mind, reader, that they only knew of their own data breach because of a third party report. One that was published months after the fact. The report was published in February of 2019, and the breach happened (according to Mr Lamb) sometime in late 2018. No apology deemed warranted, apparently, not even for missing the hack until a third party told you about it months after the fact. They then went on to write their conclusive report in August of 2019. So nearly a year between data breach and the final public debrief, where they outlined exactly what data was exposed. I call that incompetence. “Data Protection Officer”, more like resident salary sucker.

The ultimate lack of accountability is what really rubbed me up the wrong way here. And why would they be accountable? There is little in the way of consequence, it seems, for these messes. There are even examples of customers defending roll20 in the comments, referring to them as “victims” of cybercrime. They aren’t the victims here idiot, you are! I’ll include some choice examples of this for your entertainment. It’s customers like that, that make businesses feel like they don’t have to be accountable either for their actions, or in this case general inaction with regards to proactively protecting customer data. Please read through the example comment thread.

You really can’t reason with people like that. They have too much emotional stock in a corporation to admit to themselves that they got screwed by it. There were even people actually praising roll20 for its meagre efforts. A sum total of 2 blog posts, some notice tweets/emails, and patching a hole in their own boat. Thanks roll20, stellar job. Shame about all my cargo sinking to the seafloor for the bottom feeders to enjoy. I mean you only lost my full name, my IP address (so my physical location), my password, oh and some of my credit card data. Don’t worry about that roll20 (not like you would), that’s my problem. Fuck those types of customers. Wankers.

Moving on. Another example of a gormless entity losing my data is ffshrine.org. A Final Fantasy fan site that I registered with in 2010, I believe; and I haven’t used that account since 2010. Ideally, they would have flagged the account as non-active and deleted it after a couple of years. But alas, instead they just kept whatever details I gave them for the five years until their 2015 data breach. Where they lost subscriber passwords and email addresses. No warning email post event, nothing. Radio silence. I had a similar experience with tumblr back in the day. Radio silence. No accountability. Are you sensing a theme here, dear reader?

Closing thoughts.

I have written far more here than I initially wanted to, so I will keep this summary short. Tools like haveibeenpwned and Firefox Monitor are things that you as an individual can use to help protect yourself in cyberspace. They can help you take proactive measures to safeguard your own data. They can also show you evidence that the large corporations really aren’t as professional or as infallible as they like to appear.

And that when they make mistakes, mistakes such as losing your data, it is often you that has to bear the brunt of the repercussions, with little if any consequence to them. Maybe they incur a temporary stock dip. But the fact of the matter is, they’ll recover from it. However, whatever data you provided them for safe keeping, well that’s now permanently out there. Enjoy.

For example: to this day I still get phishing emails that say something like: “hey MY_FULL_NAME, YOUR_BANK has detected multiple login attempts using PASSWORD_FROM_FFSHRINE.ORG to login. We have frozen your account because we suspect fraudulent activity. Follow the obviously dodgy link provided and give us your security questions to fix this.” Although I can recognise a phishing scam when I see one, many technology-illiterate users cannot.

And make no mistake, the companies that were lax in their security, the ones that have the attitude that “breaches happen”, are the exact ones to blame for the perpetuation of the black market information economy. An economy that preys on people; the real victims. The people who trusted these corporations with their data, thinking it in safe hands. Not the corporations themselves, whose lack of diligence and general incompetence allowed the data that they were trusted with to be exposed.

Jeez… that got a bit preachy towards the end. Didn’t it? Sorry about that. It’s just seeing companies fobbing off their responsibilities, and then seeing customers with Stockholm syndrome defending these same companies against criticism – really ruffles my feathers.

Anyway, thanks for reading.

References, links, further reading.

https://github.com/mozilla/blurts-server

https://monitor.firefox.com/

https://monitor.firefox.com/breaches

https://monitor.firefox.com/security-tips

https://haveibeenpwned.com/

https://haveibeenpwned.com/About

https://feeds.feedburner.com/HaveIBeenPwnedLatestBreaches

https://blog.roll20.net/post/182811484420/roll20-security-breach

https://blog.roll20.net/post/186963124325/conclusion-of-2018-data-breach-investigation

Hacker who stole 620 million records strikes again, stealing 127 million more

#0012: A personal perspective on the end of the Adobe Flash era

#0012: A personal perspective on the end of the Adobe Flash era

What is Flash?

Adobe Flash is a multimedia software platform. However, the term “Flash” is used as a catch-all for any software that utilizes this platform to output media. The Flash technology allowed webpages to host media in the form of embedded files, encoded into either the Flash Video (.FLV) or Shockwave Flash (.SWF) format. In order to run these media files, one required a web browser plugin called the “Adobe Flash Player”.

Flash first became prevalent in the late 90s and early 2000s as an evolution on the static webpages of the time, where it was used for basically every animated webpage media element (barring animated .gif images). Including (but not limited to) delivering: video, music, and animated advertisements (e.g. banner ads with audio). It also allowed for interactive media, namely internet browser games. It did this by giving the browser media access to the user’s system inputs (e.g. keyboard and mouse).

In essence, if you browsed the internet during this time and interacted with web media, be it watching a video, playing a game, listening to music, or watching text scroll on a fancy ad, chances are good that it was delivered to you using the Adobe Flash web player. It cannot be overstated: Flash was ubiquitous in its time.

The decline of Adobe Flash

Ever since Adobe’s 2017 announcement of the official retirement of their Flash technology platform at the close of the year 2020, the internet has been abuzz with people both decrying and celebrating the end of the Flash era. Even in the years leading up to Adobe’s 2017 announcement, I had seen Flash’s relevance steadily drop off as the emergence of alternatives such as HTML5 ate up Flash’s market-share.

There are a number of reasons as to why people migrated from using Adobe’s Flash player to HTML5 for delivering webpage media. Predominantly, this is because of the public’s perception of the Flash web player having numerous security issues. Whether this is true or simply exaggerated, I cannot much comment. What I do know as a former Flash game dev is that you could make whatever network calls you wanted, as well as access the user’s local system, in order to get at anything from keystrokes to cookies. Then wrap that code up into a fairly innocuous looking .swf file and embed it into a webpage. Hmmm… I guess it did have security issues. But having said that, I am not going to pretend to be in the know when it comes to network security of the 2010s and Flash. I only used it to make browser games. The network calls to allow online game saves, the cookies for local game saves, and keystrokes for user input.

Flash’s security issues, in my opinion, were compounded by its proprietary and closed source nature. There wasn’t an easy way to check what that embedded .swf file was going to do until it did it. Consequently this made malicious code very difficult to find. HTML5 on the other hand doesn’t suffer from this issue, since it is largely open. Although most developers these days use APIs (Application Programming Interfaces) or frameworks, such as AngularJS or jQuery to make sites, or Phaser to make games (i.e. they don’t code them from scratch), the code for these tools is nonetheless there and public facing if you are inclined to look.

The openness of HTML5 (JavaScript, CSS) based technology means that it is inherently safer than Flash, since the browser acts as an interpreter for the source code, rather than just a medium for embedded compiled binaries (.swf files) where code can hide. I don’t generally like speaking in absolutes, because there are undoubtedly exceptions to this. However in a general sense this is the case when it comes to Flash and HTML5.

Additionally, Flash had another issue and that was its lack of support for mobile devices, as it was designed for the prevalent desktop platform of the time (~2008). This was in my opinion the death knell for Flash, as the explosion in popularity that internet-capable mobile devices had between then and now (2020) meant that Flash’s market-share essentially shrank proportionally to this.

Steve Jobs, CEO of Apple, published an open letter in 2010 called “Thoughts on Flash”, where he cites a number of reasons as to why his company does not use the Flash technology in their products. Namely their iPhones and iPads of the time. His reasons included: Flash’s security issues, its closed source nature, its lack of optimisation for touch interfaces, its negative effect on battery life, and more. Link below for the full article.

hyperlink: #0011: Copy-paste of “Thoughts on Flash” by Steve Jobs (2010)

Considering what the Apple CEO thought of Flash, it was no surprise that Apple was the first company that outright disallowed Flash from their platform (iOS). This meant that at the time, Flash was only supported on Android devices; and even that dropped off in the later years, since the issues that Jobs cited weren’t addressed to a satisfactory degree. In hindsight, it appears as though Apple’s dismissal of Flash was a canary in a coal mine. A prelude to its collapse in general.

I should mention that I am not really here to talk about Flash’s history. I am not a Flash scholar, nor am I interested enough in it to become one. So take what I have said with a pinch of salt. If that is what you are looking for, there are better sources for that content. I suggest starting with Adobe’s official blog, then the wikipedia.org articles, then follow their sources and read those — as well as random blog posts by internet weirdos like this one — they are always fun. What I am here to do is to talk about Flash as it relates to my experience as a game developer. The games that got me interested in it, and the games that got me out.

About me

A bit of background about me. I attended school up until the mid 2010s, and grew up (when compared to my working class peers) relatively poor. I.e. no home internet and I had to play outside a lot. Gasp! I’ll keep it vague because this isn’t really about me, however the information is relevant. My first experience with the internet in general was in the mid 2000s, where it was largely a utilitarian space as far as I was concerned. This is because I never had the time to explore it at my leisure. My access was always in a public place. Ergo monitored, censored, and timed. Either at school in ICT (Information and Communications Technology) class using the “Yahooligans” search engine (remember that?), or at the local library with that creepy moustachioed librarian woman breathing down my neck. Occasionally I could even be found at an internet cafe paying £1 for an hour of uncensored access whilst some Indian dude watched what I did from his master console. Hi Mr Singh!

Consequently, my use of the internet was always objective or mission based. I went in not to explore and learn, but with an objective. Especially when I was paying for the privilege (and it is a privilege). I remember being at the internet cafe and booting up my collection of anime sites that allowed downloads (RIP cyber12.net) to get the latest 360p Horrible Subs copy of the Bleach or Naruto episodes that had just aired the previous week in Japan. I recall cramming as many of those ~50MB episodes (.RMVB video format), as well as Final Fantasy wallpapers, low quality D12 or Linkin Park MP3s, and “kawaii ecchi” jpegs into a 256MB thumb-drive that I got second-hand from me mum. I would then take this bullshit home to my HP Pavilion A320N in order to fill its massive 50GB 3.5″ IDE hard drive, for my repeated media enjoyment. Life was simple back then.

So, what was I doing while the media files downloaded at a blazing 120KiB per second download speed? Well, I was playing Flash games of course! Keep up. I remember each episode download taking roughly half an hour, although I don’t know if the speed and file size numbers that I gave match up… probably not. The fact that I still (mostly) remember these numbers a decade later should tell you how much of an impression the experience left on my young supple mind.

Anyway, while the files downloaded, I’d hit up game sites. Websites like: Newgrounds, Miniclip, Armourgames, and of course Kongregate. The usual suspects. What games did I play? Games like “Rebuild”, “Epic Battle Fantasy”, “Bubble Tank”, “Sonny”, “Armed with Wings”, or puzzle games like the “Escape the Room” series; as well as the clones upon clones of GBA (Game Boy Advance) tactical games like “Final Fantasy Tactics” and “Advance Wars”. Additionally I played excellent point-and-click titles such as the “Reincarnation” series, Clickshake’s “Ballad of Reemus” series, and Zeebarf’s “A Small Favour” series. These same quirky point-and-click puzzle-adventure games inspired me to create my own.

Why? Well, because up until then I had only played MS-DOS point-and-click games. Like Sierra’s various “-Quest” series games. Think: “Police Quest”, “Space Quest”, “King’s Quest”, etcetera-quest. Additionally I played “Simon the Sorcerer” 1 and 2, “Sam & Max: Hit the Road”, and of course “Leisure Suit Larry”. As an aside, I never played the famous LucasArts games like “Monkey Island” and “Day of the Tentacle” until I was an adult. I am a victim ;_;.

Although the MS-DOS games sparked the love of the point-and-click adventure genre, their Flash counterparts brought the realisation home that I actually could do it too. They were fun, they had the heart of the older DOS games, but the technology was much more accessible, and they were significantly shorter. As far as I knew MS-DOS games were made using ritual possession via offerings to the Omnissiah. It was hard enough to install them and get them working properly, let alone make the bloody things. The learning curve involved was just far too sheer for my young self to effectively engage.

Whereas Flash used ActionScript 3, and an IDE named Adobe CS3. Heh, it even rhymed. Sort of. ActionScript 3 was as friendly as abstract programming languages come, and there was a plethora of online tutorials, both in the form of textual articles as well as YouTube and Dailymotion videos. This abundance of guides and media motivated me to “‘ave-arr-go!” as they say.

Developing Flash games

Developer resources for Flash media were abundant at the time. My favourite of which is the open-source IDE (Integrated Development Environment) FlashDevelop! This was my IDE of choice because: 1) I couldn’t afford an official copy of Adobe Flash Professional (or Adobe Animate as they call it now), and 2) I didn’t trust the “unofficial” versions that I found. So FlashDevelop became my de facto IDE of choice, with the open-source Flex SDK (Software Development Kit) as my .swf compiler of choice. I should mention that this was also my first practical interaction with community-driven open software, and it really opened my eyes to the liberation that it offered.

That is, not being tied to any particular company for one’s productivity tools. No cloud subscriptions, no periodic reminders to renew the lease, no “””anonymous””” telemetry phoning home… It’s actually a rather dangerous thing to get accustomed to, because it spoils a person when you realise how shit a lot of proprietary applications become with this bloat. I mean I understand why a lot of it is there, it’s just a shame that it punishes (inconveniences) the paying customer and not the pirate; who is most likely running a cracked version (in a virtual machine) without the bloatware. It’s all academic though, because I couldn’t afford it anyway. So it didn’t really matter what I thought about it at the time (or even now really).

The main disadvantage I recall FlashDevelop having when compared to Adobe Flash CS(-whatever) is that Adobe Flash had an animation timeline at its centre. This timeline was designed to have various Class objects attached to it at various points or frames. In AS3 a class generally encapsulates an entire source file. With regards to games: the online tutorials would attach a LoadingScreen.as class to frame 0, then the MainMenu.as class to frame 1, and so on. This works without necessitating verbose instructions to the program as to how to handle them, as it was done automatically using the timeline. The idea is that when one frame terminated, an automatic call would be made to move to the next frame, and then run the attached classes and the methods contained within.

FlashDevelop on the other hand had far fewer visually accessible components (like the various art tools or the timeline) when compared to the Adobe Flash IDE; and consequently it was less user friendly. At least initially. However once the learning curve was surmounted, it proved to be a very robust IDE. One thing that didn’t help young me is the fact that the vast majority of the online tutorials were created specifically using the Adobe IDE; and unwittingly used features only present within that IDE, such as the aforementioned timeline.

For example: it took me quite a while to learn the correct techniques to create a functioning loading screen in FlashDevelop because of this. There was a lot of chopping up of other people’s code, then trial and error to get it working. And once it worked, then trying to understand why, in order to reliably replicate the method in future projects.

I remember managing source files in the FlashDevelop IDE being similar to the C++ IDEs I used at the time (Code::Blocks, Eclipse, Bloodshed Dev-C++). For example, having to first manually import other code files in order to make calls to their functions or add their objects to the stage. This is in contrast to the Adobe IDE, which glossed over much of this type of stuff by using the timeline and a drag-and-drop interface; where users would drag an image onto the stage and then click on it to open a box where they’d add in any additional code. The Adobe IDE seemed like an interface designed for animators first and foremost. That’s because it was. Whereas FlashDevelop was more of a programmer’s IDE, where a lot of the animation tools of the Flash IDE were absent.

I realise that this is more a criticism of me rather than of FlashDevelop; however, speaking of FlashDevelop and inconveniences: I do not miss the tedium of manually embedding large volumes of images with this IDE. Then casting them as “bitmaps”, to then place them into individual “movieclip” containers. All with mandatory unique names, by the way. This, in order to allow manipulation of the asset when it was finally added to the stage. At which point I had to manually assign their dimensions (width, height), alpha (transparency), and initial stage (x,y) location. To then use an external tweening library (I forget its name) for actually animating these images. Think sliding alpha gradients for flickering lasers, and gradual increase/decrease of an X position variable to make sliding doors open/close. Doing this for every image in a game got laborious quickly, and if I had to do it again I would have created helper functions to do it for me. Young me laboriously hard coded it all, and consequently learned good lessons.

Lessons on the difference between working hard and working smart. In this case, on making the time to create utilitarian, recyclable functions. Ones that can take the various images’ stats as arguments and return what I wish. The strength of them is their reusability, which leads to cleaner code and the avoidance of long runs of repeated hard-coding. Instead it would be just one line for each new image: the function call, with the specific image’s stats as its arguments. But I guess a lesson learned the hard way is a lesson learned forever.

Developer migration

It seems like around 2011-2013 was when the Flash exodus really began. Known creators began to either drop off or change to other things. Around here, in my opinion, is also when the golden age of Newgrounds effectively ended. You can see from the example of popular Flash animators of the time (Zone and Tiarawhy) how their content either dropped off and/or changed significantly in a bid to reinvent themselves.

example of a creator’s flash media submission fall off

I remember when it came time to move on from the Adobe Flash player and browser games in general. I tried out Adobe Air because like many developers who started out with Flash games, I wished to pivot to creating games for other platforms. For me it was the desktop. I wanted to make “proper” point-and-click games, and hopefully earn a buck or two doing it.

Even back then, the public perception of Flash games was as a means of getting a developer’s feet wet in making games and little more. Sooner or later, if a person is serious about making games, and serious about making a living making games, then they have to move on to another platform. Many migrated to making mobile games for iOS and Android. Even in cases where developers still wanted to make browser games, there were better (more lucrative and future-proof) technologies/paths to that.

I saw many developers move on to using the emergent Unity technology during this transitory period. Unity allowed the creation of multiplatform programs whilst still using the same core codebase; just exporting it into different media formats appropriate to their target platforms. This includes desktops (Windows, Macintosh, and Linux), mobile platforms (Android, iOS), and in this case even web browsers by using the Unity Web Player. At the time I opted to stick with my current tooling, because I felt that being “set back” by having to learn another system (Unity and C#) would stifle my ability to actually finish games. It’s only with time that I have understood that a person never stops being the student. In order to progress optimally in any discipline one should not shy away from learning new things when the opportunity makes itself apparent. If I had kept up with making games, I would have had to retool anyway. Better to eat crow while it’s still young and tender.

Unity Web Player example game: https://www.kongregate.com/games/mythicowl/hexologic
(Notice how it still works post 2020 — assuming Kong is still online.)

I believe my first introduction to the Adobe Air technology was via two games in particular: Edmund McMillen’s original “The Binding of Isaac” and Jasper Byrne’s “Lone Survivor”. Other devs followed a similar path, like Amanita Design and their game “Machinarium”, but the prior two are the ones I had hands-on experience with. Anyway, once I purchased a copy of their games, I found the core .swf files in their source folder and a native executable that called them. Along with a bunch of Adobe Air library files. I remember that the .swf files could also be played using a local version of Flash player; bypassing the native executable in the process. Meaning that Adobe Air itself, as I understand it, just wraps the .swf file with an .exe.

So in an attempt to ape them, I downloaded the Air SDK and used it. It wasn’t hard to import Air into FlashDevelop and create a desktop app using it. I recall there being minimal code alterations in the process. I should state that I did so initially as an experiment, but I will do so again and host the .zip archives on this site once/if I can find my source files again. For posterity, and as a means of providing anyone interested with a playable copy of my early games.

Closing statements

That’s all I really have to say when it comes to Flash. I loved the games I played as a child. They introduced me to many new genres of games. From zombie shooters, to puzzle games. Consequently I have a lot of good memories with it. I never really cared or noticed when websites stopped using Flash for banner adverts or for delivering videos (like youtube.com), as it really didn’t matter to me. I did however notice when Unity web player, Java web player, and HTML5 games slowly started to become more and more prevalent; or when Flash animation died. And for good reason.

Flash is very limited. I remember the frustrating media file size embed limits. Well, you might say that I should’ve just used a loader class to load in external media as needed, but that had its own associated issues. Such as the program not finding the files (once online) that you uploaded with it. And some sites at the time would only allow one file upload (the .swf file) with no supporting media files, where others had a size limit for the actual .swf file itself. I remember having to strip a lot of embedded music (.mp3) out of my game “Last Life: The Blue Key” because Kongregate had a .swf file size limit.

For many reasons like this, Flash got overtaken by its competitors in its space. Then for other reasons (think Steam game sales, Apple iOS games, and plummeting ad revenue), the market for browser games in general fell off. Making it a space largely for new developers to cut their teeth and little else. Now, I don’t think it’s even that.

That makes me wonder; what will happen to all these browser game sites like Newgrounds and Kongregate? Although they host games using various technologies, their decade-plus backlog of games is going to be in the Flash format. Their games catalogue is going to get gutted. And I don’t think that many (if any) developers are actually going to go back to a game they made (probably as a kid) 10 years ago and remake it for an unprofitable platform. I know I am not. I have moved on from games in general, and even if I got back into it (which I want to), I’d work on a new property. I’d honestly be surprised if these sites still exist in a couple of years because of this.

Still, it’s not all doom and gloom. Although Kongregate seems content to let their Flash content die on the vine, Newgrounds created a custom Flash player just for this reason. Although it seems like a bit of a patchwork or stopgap measure, it is much better than nothing.

Anyway, the good Flash games, the ones people loved from this era: they’ll be preserved. The online copies can be stripped from the web browser (while they are there). Saved, and played locally using a desktop version of Flash player. And for the ones people didn’t care to preserve. Well, like so much else in life: they get lost to the sands of time.

Thank you for reading.

One more thing…

This made me laff, but mostly because Flash is Ded. RIP.

References, links, further reading

https://web.archive.org/web/20171202123704/https://theblog.adobe.com/adobe-flash-update/
https://www.cnet.com/products/hp-pavilion-a320n-athlon-xp-2800-plus-2-08-ghz-monitor-none-series/
https://www.flashdevelop.org/
https://blog.adobe.com/en/2019/05/30/the-future-of-adobe-air.html#gs.na44sx
https://helpx.adobe.com/security/products/flash-player.html
https://en.wikipedia.org/wiki/Adobe_Flash
https://en.wikipedia.org/wiki/Adobe_AIR
https://en.wikipedia.org/wiki/FlashDevelop
https://en.wikipedia.org/wiki/Apache_Flex
https://en.wikipedia.org/wiki/MS-DOS
https://en.wikipedia.org/wiki/Open-source_software
https://en.wikipedia.org/wiki/RMVB
https://en.wikipedia.org/wiki/Edmund_McMillen
https://en.wikipedia.org/wiki/Comparison_of_HTML5_and_Flash
https://en.wikipedia.org/wiki/HTML5
intro to tweening:
https://www.peachpit.com/articles/article.aspx?p=20965
example of Adobe IDE specific tutorials:
https://helpx.adobe.com/animate/using/shape-tweening.html
“Thoughts on Flash” by Steve Jobs
Primary Source: https://www.apple.com/hotnews/thoughts-on-flash/
Secondary Source: https://appleinsider.com/articles/10/04/29/apples_steve_jobs_publishes_public_thoughts_on_flash_letter
Secondary Source: https://medium.com/riow/thoughts-on-flash-1d1c8588fe07
https://en.wikipedia.org/wiki/Thoughts_on_Flash
https://en.wikipedia.org/wiki/HTML5#%22Thoughts_on_Flash%22

#0011: Copy-paste of “Thoughts on Flash” by Steve Jobs (2010)

#0011: Copy-paste of “Thoughts on Flash” by Steve Jobs (2010)

Apple logo with Adobe Flash logo on top of it

Preamble

This is a transcription of an open letter written in 2010 on the subject of Adobe Flash by the CEO of Apple at the time: Steve Jobs. I paste it here for reference and posterity purposes.

Rather annoyingly, Apple has since removed the original article from their official company website, so this is a copy from a secondary source: an archive website called web.archive.org (A.K.A. the Wayback Machine). This was then verified by comparing it with copies from other website articles published around 2010. Check the links and references section for specifics. I think this is a good illustration of the fragility of information on the internet. When primary (controlled) sources can suddenly disappear, we have to rely on secondary sources and their general trustworthiness.

(START QUOTE)

Thoughts on Flash

Apple has a long relationship with Adobe. In fact, we met Adobe’s founders when they were in their proverbial garage. Apple was their first big customer, adopting their Postscript language for our new Laserwriter printer. Apple invested in Adobe and owned around 20% of the company for many years. The two companies worked closely together to pioneer desktop publishing and there were many good times. Since that golden era, the companies have grown apart. Apple went through its near death experience, and Adobe was drawn to the corporate market with their Acrobat products. Today the two companies still work together to serve their joint creative customers – Mac users buy around half of Adobe’s Creative Suite products – but beyond that there are few joint interests.

I wanted to jot down some of our thoughts on Adobe’s Flash products so that customers and critics may better understand why we do not allow Flash on iPhones, iPods and iPads. Adobe has characterized our decision as being primarily business driven – they say we want to protect our App Store – but in reality it is based on technology issues. Adobe claims that we are a closed system, and that Flash is open, but in fact the opposite is true. Let me explain.

First, there’s “Open”.

Adobe’s Flash products are 100% proprietary. They are only available from Adobe, and Adobe has sole authority as to their future enhancement, pricing, etc. While Adobe’s Flash products are widely available, this does not mean they are open, since they are controlled entirely by Adobe and available only from Adobe. By almost any definition, Flash is a closed system.

Apple has many proprietary products too. Though the operating system for the iPhone, iPod and iPad is proprietary, we strongly believe that all standards pertaining to the web should be open. Rather than use Flash, Apple has adopted HTML5, CSS and JavaScript – all open standards. Apple’s mobile devices all ship with high performance, low power implementations of these open standards. HTML5, the new web standard that has been adopted by Apple, Google and many others, lets web developers create advanced graphics, typography, animations and transitions without relying on third party browser plug-ins (like Flash). HTML5 is completely open and controlled by a standards committee, of which Apple is a member.

Apple even creates open standards for the web. For example, Apple began with a small open source project and created WebKit, a complete open-source HTML5 rendering engine that is the heart of the Safari web browser used in all our products. WebKit has been widely adopted. Google uses it for Android’s browser, Palm uses it, Nokia uses it, and RIM (Blackberry) has announced they will use it too. Almost every smartphone web browser other than Microsoft’s uses WebKit. By making its WebKit technology open, Apple has set the standard for mobile web browsers.

Second, there’s the “full web”.

Adobe has repeatedly said that Apple mobile devices cannot access “the full web” because 75% of video on the web is in Flash. What they don’t say is that almost all this video is also available in a more modern format, H.264, and viewable on iPhones, iPods and iPads. YouTube, with an estimated 40% of the web’s video, shines in an app bundled on all Apple mobile devices, with the iPad offering perhaps the best YouTube discovery and viewing experience ever. Add to this video from Vimeo, Netflix, Facebook, ABC, CBS, CNN, MSNBC, Fox News, ESPN, NPR, Time, The New York Times, The Wall Street Journal, Sports Illustrated, People, National Geographic, and many, many others. iPhone, iPod and iPad users aren’t missing much video.

Another Adobe claim is that Apple devices cannot play Flash games. This is true. Fortunately, there are over 50,000 games and entertainment titles on the App Store, and many of them are free. There are more games and entertainment titles available for iPhone, iPod and iPad than for any other platform in the world.

Third, there’s reliability, security and performance.

Symantec recently highlighted Flash for having one of the worst security records in 2009. We also know first hand that Flash is the number one reason Macs crash. We have been working with Adobe to fix these problems, but they have persisted for several years now. We don’t want to reduce the reliability and security of our iPhones, iPods and iPads by adding Flash.

In addition, Flash has not performed well on mobile devices. We have routinely asked Adobe to show us Flash performing well on a mobile device, any mobile device, for a few years now. We have never seen it. Adobe publicly said that Flash would ship on a smartphone in early 2009, then the second half of 2009, then the first half of 2010, and now they say the second half of 2010. We think it will eventually ship, but we’re glad we didn’t hold our breath. Who knows how it will perform?

Fourth, there’s battery life.

To achieve long battery life when playing video, mobile devices must decode the video in hardware; decoding it in software uses too much power. Many of the chips used in modern mobile devices contain a decoder called H.264 – an industry standard that is used in every Blu-ray DVD player and has been adopted by Apple, Google (YouTube), Vimeo, Netflix and many other companies.

Although Flash has recently added support for H.264, the video on almost all Flash websites currently requires an older generation decoder that is not implemented in mobile chips and must be run in software. The difference is striking: on an iPhone, for example, H.264 videos play for up to 10 hours, while videos decoded in software play for less than 5 hours before the battery is fully drained.

When websites re-encode their videos using H.264, they can offer them without using Flash at all. They play perfectly in browsers like Apple’s Safari and Google’s Chrome without any plugins whatsoever, and look great on iPhones, iPods and iPads.

Fifth, there’s Touch.

Flash was designed for PCs using mice, not for touch screens using fingers. For example, many Flash websites rely on “rollovers”, which pop up menus or other elements when the mouse arrow hovers over a specific spot. Apple’s revolutionary multi-touch interface doesn’t use a mouse, and there is no concept of a rollover. Most Flash websites will need to be rewritten to support touch-based devices. If developers need to rewrite their Flash websites, why not use modern technologies like HTML5, CSS and JavaScript?

Even if iPhones, iPods and iPads ran Flash, it would not solve the problem that most Flash websites need to be rewritten to support touch-based devices.

Sixth, the most important reason.

Besides the fact that Flash is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we do not allow Flash on iPhones, iPods and iPads. We have discussed the downsides of using Flash to play video and interactive content from websites, but Adobe also wants developers to adopt Flash to create apps that run on our mobile devices.

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of platform enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.

Flash is a cross platform development tool. It is not Adobe’s goal to help developers write the best iPhone, iPod and iPad apps. It is their goal to help developers write cross platform apps. And Adobe has been painfully slow to adopt enhancements to Apple’s platforms. For example, although Mac OS X has been shipping for almost 10 years now, Adobe just adopted it fully (Cocoa) two weeks ago when they shipped CS5. Adobe was the last major third party developer to fully adopt Mac OS X.

Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.

Conclusions.

Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.

The avalanche of media outlets offering their content for Apple’s mobile devices demonstrates that Flash is no longer necessary to watch video or consume any kind of web content. And the 200,000 apps on Apple’s App Store proves that Flash isn’t necessary for tens of thousands of developers to create graphically rich applications, including games.

New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind.

Steve Jobs
April, 2010

(END QUOTE)

References, links, further reading

Primary Source:

  • [REMOVED] https://www.apple.com/hotnews/thoughts-on-flash/

Secondary Sources:

  • https://appleinsider.com/articles/10/04/29/apples_steve_jobs_publishes_public_thoughts_on_flash_letter
  • https://medium.com/riow/thoughts-on-flash-1d1c8588fe07
  • https://web.archive.org/web/20100703090358/https://www.apple.com/hotnews/thoughts-on-flash/

Steve Jobs at the 2010 D8 Conference video (extract on flash)

hyperlink: https://www.youtube.com/watch?v=YPb9eRNyIrQ

Steve Jobs at the 2010 D8 Conference (full conference)

hyperlink: https://www.youtube.com/watch?v=a0AZLPqjpkg

  • https://en.wikipedia.org/wiki/Thoughts_on_Flash
  • https://en.wikipedia.org/wiki/HTML5#%22Thoughts_on_Flash%22