Category Archives: pabulum

Pabulum– Stuff to Think About!

Raid Recovery

I have a broken Western Digital MyBookLive with two 2TB drives on board. One appeared to have gone Bad, which uncharacteristically took the whole ‘network serving’ concept offline.

I took out both 2TB drives and marked them A and B
(do it! or you will wish you had…)
then tested them both on my Linux Mint 20 Ulyana (AKA Ubuntu 20) workstation. BE SURE yours has NO RAID devices on it EXCEPT what we are doing here : )

You can ‘hot swap’ drives, it seems, and I have been doing so over a SATA cable that hangs out of the Server box.
I am not responsible for having said that; it just did not break
MY workstation.

None of the following works over USB-
USB does not send all the drive stats needed, it seems..

“A” showed a S.M.A.R.T. Error
in gsmartcontrol, which confirmed multiple bad sectors waiting to be reallocated- which Was Not Happening : )
“B” seemed Okay- But I homed in on “A”. Just Cos.
Note I left only “A” connected, and NOT “B”.

OK so, all as ‘root’,
I ran lsblk
to identify what block/storage Devices the system could still ‘see’:

sda 8:0 0 596.2G 0 disk
├─sda1 8:1 0 100M 0 part
└─sda2 8:2 0 596.1G 0 part /mnt/d2win
sdb 8:16 0 119.2G 0 disk
├─sdb1 8:17 0 71.7G 0 part /mnt/win
├─sdb2 8:18 0 547M 0 part
├─sdb3 8:19 0 7.8G 0 part [SWAP]
└─sdb4 8:20 0 39.1G 0 part /
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part /mnt/1TBDATA
sdh 8:112 0 1.8T 0 disk
├─sdh1 8:113 0 1.9G 0 part
├─sdh2 8:114 0 1.9G 0 part
├─sdh3 8:115 0 489M 0 part
└─sdh4 8:116 0 1.8T 0 part

There at 1.8TB was the drive with 4 partitions: /dev/sdh…
the largest, /dev/sdh4, likely holds the mass of Data I am after

mount /dev/sdh4 /mnt/tmp
gives: “unknown filesystem type ‘linux_raid_member’”
Reading up some, it looks like we need the “mdadm” (Multiple Device admin) command-line utilities and the use of this command:
mdadm --assemble --scan
This, it seems, will find anything that looks like RAID partitions and assemble them as devices under /dev/md*

ls /dev/md*
gave me:
/dev/md126 /dev/md127


root@SSD128:/mnt# mount /dev/md126 /mnt/tmp
mount: /mnt/tmp: can’t read superblock on /dev/md126.
root@SSD128:/mnt# mount /dev/md127 /mnt/tmp
mount: /mnt/tmp: can’t read superblock on /dev/md127.
root@SSD128:/mnt# mdadm --verbose --assemble --force /dev/md127 /dev/sdh4
mdadm: looking for devices for /dev/md127
mdadm: /dev/sdh4 is busy - skipping
root@SSD128:/mnt# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@SSD128:/mnt# mdadm --verbose --assemble --force /dev/md127 /dev/sdh4
mdadm: looking for devices for /dev/md127
mdadm: /dev/sdh4 is identified as a member of /dev/md127, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md127
mdadm: added /dev/sdh4 to /dev/md127 as 0
mdadm: /dev/md127 has been started with 1 drive (out of 2).

…. To Be Completed …





Thunderbird and Google Apps Address Book Sync

This assumes you use a Gmail address, Google Apps or Google G Suite (can be Free for Non-Profits)

That you use the free THUNDERBIRD Email client, already set up using IMAP, but realize your Contacts are not part of the clever syncing your Mail Folders use. (Warning: this will NOT work with anything but Google!- No Yahoo!, Earthlink, etc!!)

So, your online Contacts will not appear within Thunderbird’s own Address Book,
or at least not be In Sync…
You CAN see them on the Google Contacts website,
but they are NOT the same as your Thunderbird Contacts.
Well, Thunderbird will collect the addresses of people you write or reply to, but this will NOT be the same set as the web-based Contacts. SO! :

-0) If you ever Replied or Wrote to someone within Thunderbird, most likely they will end up in the address book called
“Collected Addresses”.
Start typing some part of the name on the To:, CC: or BCC: Line,
and the whole address collected should come up…

0) In Thunderbird,  Search for, and reply to whoever it is you need to contact:
use: View.. Toolbars.. Quick Filter toolbar to be certain it is ON
(It’s incredibly useful anyway)
– Search for the Name of the person as Sender that you want to write to
– assuming they wrote you at least once, they will be there and you can use REPLY!

No? OK!

1) Just use the Google Contacts website anyway : )

2) Best of Both Worlds:
Use Thunderbird alongside the Google Contacts web page
to get the best of BOTH worlds- Copy & Paste from that list into the TO: line, etc…

3) I use the gContactSync add-on to pull contacts
FROM GOOGLE into THUNDERBIRD and merge them up.
You MAY end up with duplicates this way, so an additional add-on is then used to remove the duplicates. A bit messy, but ideal when complete. DO NOT install it from its website– that’s just for Info.
In Thunderbird, click
Tools… Add-ons… Extensions..
And search for: gContactSync

Then permit its use on your Google account
Be Vewwy Vewwy Careful where you Go from here, as you are likely to end up with Duplicates (for which there is a separate Fixer Extension).
Best is to look carefully at the gContactSync settings on your Toolbar (after you have restarted) and see the offerings for which direction the sync is to go: BOTH ways is probably NOT a good idea at first- Set it to copy FROM Google TO Thunderbird first?

you have Backups, right?

Within Thunderbird, click Tools > Address Book.
Select the desired Address Book (s).
Note: Make sure that you are selecting a specific address book.
The selection by default is set as “All Address Books”
and exporting this way will result in a blank file.
Select Tools > Export…
Note: If you do not see the Export option, click View > Toolbars to turn this option on.

Select “LDIF” (an industry-standard format that will allow you to re-import)
from the Format drop-down box.
Choose where you want to save the exported file, give the file a name, and click Save.

Comments Welcomed!

Set Video File Name to Date Made

Using: Linux, exiftool, mediainfo
Requires- Intermediate LINUX skills and Google.
Further Reference:
You MAY want to clean up file naming with DETOX

I have lots of small MP4 video files with arbitrary sequence numbers like:
dji_001.mp4, dji_002.mp4.. etc etc- making it hard to divide them into folders later for particular filmed events, each with maybe a dozen files with similar time stamps… The file dates themselves are unreliable, as they may be much later: the date of the copy, not the filming.
These video files contain a wealth of internal metadata of all sorts, tagged when the video was made, that’s VERY interesting, including GPS info, altitude and way more:
Hey, try it yourself! Run exiftool or mediainfo
on any movie.
Note that EXIF Timestamps are, very reasonably, in UTC, so consider that when you wonder why your videos are tagged hours off–
Unless you are near 0 longitude (Western Europe..) ; )
My exif data DOES store lat/long but that’s another Project : )

To batch rename an entire folder of arbitrarily named files, and recursively everything below:
(No Line Break, BTW!)

exiftool "-filename<CreateDate" -d %Y_%b_%d_%a_@_%I:%M_%p%%-c.%%le -r -ext mp4 *

(I fear Spaces so tend to use the Underbar _ character)

exiftool "-filename<CreateDate" -d %Y-%b-%d-%a_@_%I:%M:%S_%p%%-c.%%le -r -ext mp4 *

(Note: if meant for Windows, these filenames contain things like the Colon “:” which Windows will NOT accept)

exiftool "-filename<CreateDate" -d %Y_%b_%d_%a_%I.%M.%S_%p%%-c.%%le -r -ext mp4 *
gives: 2018_Aug_13_Mon_07.44.08_PM.mp4
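The %-codes above are the standard strftime ones, so you can preview a format string with plain `date` before unleashing exiftool on a whole folder (the timestamp here is just the example from the renamed file above, typed in by hand):

```shell
# Preview the filename format with plain `date` before renaming anything.
# LC_ALL=C pins the month/day abbreviations to English.
LC_ALL=C date -d "2018-08-13 19:44:08" "+%Y_%b_%d_%a_%I.%M.%S_%p"
# gives: 2018_Aug_13_Mon_07.44.08_PM
```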

the DATE/TIME format variables can be studied by using:
man date
man exiftool
The latter will also remind you that the “-r” option means Recursive, which you may not want… and more.

exiftool can also work on still images. Very useful!

Linux RAID

  • This guide is a “nutshell” (brief) script that assumes a reasonable level of Linux proficiency & understanding, & is not geared to a particular Linux distribution or step-by-step instructions.
    Lots of Googleable entries, such as the green COMMANDS, should help : )
  • I am assuming an already-running LINUX system of recent origin (I use MINT, an Ubuntu/Debian derivative) on a drive all its own, NOT RAID, possibly a small SSD of 64GB, plus the addition of 2 EMPTY identical drives to be put in RAID1 formation (Mirrored: that is, 2 identical drives ‘combined’ redundantly into one, for DATA use),
    NOT for Operating System Boot use in this guide.
  • Do not use BIOS RAID or Hardware RAID; this is all Software RAID done by LINUX.
  • TEST ALL THE DRIVES that will be used, including any Operating System drives, using their S.M.A.R.T. facility. IE: the GSMARTCONTROL GUI.
  • If the drives are over 2TB in size (and perhaps even if they are not) they must be configured and partitioned using GPT, not old-style MBR (Master Boot Record), else you will not see past the 2TB boundary. This works even on old PC architecture without EFI : )
  • Use the “PARTED” utility (do not use FDISK).
  • “label” the disk “gpt” as per instructions. Create one conventional partition on EACH drive, ideally using the whole disk; the ext4 filesystem itself will be made on the assembled array later.
  • Check whether you have the RAID utility “MDADM”, and if not, get it.
  • Check What You Have Got: lsblk AKA “LiSt BLocK devices”
  • Check whether something RAID-like is around yet (not as silly as it sounds while we are experimenting : )
    cat /proc/mdstat
  • Know the Device Names, then use this command:
    $ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  • The above assumes the devices were /dev/sda and /dev/sdb. But you knew that..
  • On another console, run: cat /proc/mdstat
  • This will show you the ongoing Mirroring Process-
  • a new ‘device’ will now exist: /dev/md0, as specified above. Give it a filesystem with mkfs.ext4 /dev/md0, then you can mount it like so on an existing Mount Point:
  • mount /dev/md0 /mnt/raid
  • I use no Options, as things seem to get auto-detected nicely.
  • To make it auto-mount, add an entry to /etc/fstab. Use “blkid” to find its UUID, which is the correct way to mount storage in Linux.
  • You can use the device IMMEDIATELY after issuing the last “mdadm” command above and mounting the array. It can be written to while it is still mirroring, but the drives will be VERY VERY BUSY and, in my case, overheated while doing so!
  • I use the “HDDTEMP” utility to check drive temperatures:
    Example: sudo hddtemp /dev/sd[a-b]
  • Use the advanced features of SMARTMONTOOLS to Email or Notify you if SMART monitoring notices drive degradation.
    Not Covered Here.
  • Use “nethogs” to watch how the server is being used by the Network.
  • To Be Continued. Enjoy!
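For the /etc/fstab step above, an entry might look like this (the UUID and mount point are made-up placeholders; get the real UUID from blkid):

```
# /etc/fstab entry for the RAID1 array (example UUID -- substitute your own)
# 'nofail' lets the machine boot even if the array is missing or degraded
UUID=0a1b2c3d-e4f5-6789-abcd-ef0123456789  /mnt/raid  ext4  defaults,nofail  0  2
```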

Clean Up File Names

Cleaning up Funny File Names
Keep your Original files somewhere SAFE as a source to restart the project should it hiccup : )
Utilities Used: Google them for your Platform…
Examples are for Image Files- Suit Yourself here though.
Red is Commands- Green is My Results.

detox, exiftool, imagemagick, convmv, 

I copied a lot of cranky, 20-year-old Floppy Disk image files into a Linux folder to clean up, with the intention that they should end up inside Apple Photos, which would use their proper Image Timestamp to good effect : )

$ls -l
-rwxr-xr-x 1 sysop sysop 59993 Mar 29 2003 <A9>2002 12 19 Sunrise -5<B0> (21).jpg
-rwxr-xr-x 1 sysop sysop 78345 Mar 29 2003 <A9>2002 12 19 Sunrise -5<B0> (79).jpg
-rwxr-xr-x 1 sysop sysop 55210 Mar 29 2003 <A9>2002 12 31 Silvester (1).jpg
-rw-r--r-- 1 sysop sysop 55302 Mar 29 2003 ©2002 12 31 Silvester (2).jpg
-rwxr-xr-x 1 sysop sysop 190714 Feb 15 2003 20%20Mutterstuten%20mit%20Fohlen.jpg

Be Nice! Let’s Set ’em all to reasonable Permissions:
$chmod -Rvc 644 *

This untangles ‘funny’ characters and irregularities:
$detox -r -v *

These three unify foreign language characters to standard UTF-8
(Note the final "." period meaning "Here")
$convmv -r -f windows-1252 -t UTF-8 .
$convmv -r -f ISO-8859-1 -t UTF-8 .
$convmv -r -f cp-850 -t UTF-8 .

Clean Em Up!! Lowercase names:
$for file in *; do mv -i -- "$file" "${file,,}"; done
Replace spaces in file names with underbar:
$rename 's/\s/_/g' ./*.jpg
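A safe way to try both steps is in a throwaway directory first; this sketch uses plain bash parameter expansion for the space replacement, in case the perl `rename` utility is not installed (the file name is invented):

```shell
# Throwaway demo (bash): lowercase names, then turn spaces into underbars
dir=$(mktemp -d) && cd "$dir"
touch "My Photo (1).JPG"
for file in *; do mv -- "$file" "${file,,}"; done     # ${file,,} = lowercased name
for file in *; do mv -- "$file" "${file// /_}"; done  # ${file// /_} = spaces -> _
ls
# gives: my_photo_(1).jpg
```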

None of this so far changes the original Time Stamp on the file
(its creation date), only its access point: its Name.
Which is Good. Could be useful.
Older digital Pictures did not use the EXIF metadata
that records when the Picture was taken, etc., so this is all we have got: the file date listed by “ls -l”.

I wanted to standardize on .JPG files, as there was a mix of GIF, BMP, etc etc.. Your choice, however.
ImageMagick's MOGRIFY is good for that; here, making all GIFs into JPGs.

$mogrify -format jpg *.gif

*note* I found some animated GIFs, and the result was an array of single .jpg's,
as JPG does NOT have the ability to Animate! Ugh!
IE: this single file has 6 images within it, per ImageMagick's "identify" utility.. Just a Warning...
$identify WdfAnimate.gif
WdfAnimate.gif[0] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000
WdfAnimate.gif[1] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000
WdfAnimate.gif[2] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000
WdfAnimate.gif[3] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000
WdfAnimate.gif[4] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000
WdfAnimate.gif[5] GIF 275x440 275x440+0+0 8-bit sRGB 256c 98.7KB 0.000u 0:00.000

The resulting new JPG output files have today’s timestamp,
not that of the original GIF, So! ::

$for i in *.gif; do touch -r "$i" "${i%.*}.jpg"; done

This ‘touches’ (sets the Timestamp of) each new .jpg to be the SAME as its Reference file– the Original.
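You can verify that `touch -r` really copies the reference file's timestamp with `stat`; both files here are temporary ones invented for the test:

```shell
# Prove that `touch -r` copies the mtime from a reference file
ref=$(mktemp) && new=$(mktemp)
touch -d "2003-03-29 12:00:00" "$ref"   # back-date the 'original'
touch -r "$ref" "$new"                  # stamp the 'converted' file from it
stat -c '%y' "$ref" "$new"              # both lines now show the same date
```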
Now Let’s mess with the converted file “witch.jpg” for Testing; then apply to All..

$jhead -exifmap witch.jpg
File name : witch.jpg
File size : 67559 bytes
File date : 2000:02:16 08:35:32
Resolution : 398 x 300
JPEG Quality : 92

No EXIF data Present. Let’s create it with the current File Date:
$jhead -mkexif witch.jpg
Modified: witch.jpg

Now Look: the additional EXIF map records the EXIF
Timestamp as of when the Picture was Taken,
regardless of what happens to the file’s timestamp from here on out:

$jhead -exifmap witch.jpg
Map: 00008-00038: Directory
Map: 00038-00058: Data for tag 0132
Map: 00058-00076: Directory
Map: 00076-00096: Data for tag 9003
Map: 00096-00126: Directory
Map: 00126-00126: Thumbnail
Map: 00126- End of exif
Map: 00000 49 49 2a 00 08 00 00 00 02 00
.. thumbnail data, I think?? ...
Map: 00120 00 00 00 00 00 00 00 00 11 04
File name : witch.jpg
File size : 67677 bytes
File date : 2000:02:16 08:35:32
Date/Time : 2000:02:16 08:35:32
Resolution : 398 x 300
JPEG Quality : 92

Use the Manual Pages for these Utilities here for much more useful stuff : )


Linux Server Breakin Attempts

Heads Up, as I am notified that my Virtual Linux Server logs have suddenly started growing much faster than usual.
Also I got a warning that Virtual Memory was Low.

This is the image after things got fixed:
Looking through the logs I see torrents of failed login attempts over the SSH (Secure Shell) and FTP ports (yes, I am trying hard to switch to SFTP, but that’s another story) at the rate of 5 per second or more at times.
Several Issues to Note:
– I moved SSHD from the default port 22 to 1066 years ago.
That was not, I thought, a ‘well known port’, unless of course someone figures it out.
Had not changed it since.
– The server auto-updates itself regularly and I scan and check it manually now & then,
and there does not appear to be a crack so much as brute-force attacks, perhaps combined with guesswork.

  • Hackers obviously scanned & found the (years-old) ‘new’ port 1066. I have since moved it again.
    – Hackers then launched a barrage of brute-force attempts with various names and who-knows-what passwords on that particular port. (Login failures are restricted to 3 per 600s session in /etc/ssh/sshd_config.)
    – Interestingly, ‘root’ was never tried (it’s disabled anyway)-
    I assume because that could trigger a default alert- But: admin, demo, test etc? Of course.
    – These attacks came from unique IP addresses all over the world. Yes, folks, mainly China and Asia. Russia did not show up per se, but then why would it? : )
    Few came from the same source IP, or even subnet, more than once. RESPONSES:
    – I tried to ban China in iptables. Not as easy as it sounds, and a poor solution anyway, China being a majority of the sources but not all.
    Overfilling iptables uses up kernel memory and exhausts Virtual Memory : (
    – I set up “fail2ban”, which examines predetermined log files for failures and acts upon them to ‘ban’ the source, using iptables again–
    which is useless, as each attempt was from a new IP. Oh Yes! From literally YEARS ago I suddenly recalled /etc/hosts.allow & /etc/hosts.deny, which act on the initial service port connection and CAN check wildcard hostnames by name AND IP.
    So now my rules are: deny from anywhere EXCEPT a couple of my local ISPs. No-one gets in now, regardless, unless their reverse IP name matches ISPs in my area.
  • A good solution would be light on server resources, lest the result be a Denial of Service attack overwhelming the system with blocking rules.
  • Judging by what’s happening recently, I fear a “Grey Goo Meltdown” of the Internet- I assume MOST of these attacking hosts have themselves been broken into and turned into ‘zombie bots’ attempting to propagate themselves. The ultimate purpose is to obtain a concerted, powerful platform running software of the primary attacker’s choosing to launch denial-of-service attacks on target domains.
    These services are For Hire on the Dark Web. Here is a sample log at the end of this post,
    and I am thankful my lackadaisical inattention was not more severely punished by the blackhats of the Internet.
    I used to use hosts.allow/deny on EVERYTHING with only minor inconvenience. Security is Interesting & entertaining, much like a firework display, until you get blasted… : )
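The hosts.allow/hosts.deny arrangement described above boils down to two tiny files; the hostname and subnet below are made-up placeholders for your own ISPs:

```
# /etc/hosts.deny -- refuse every TCP-wrapped service by default
ALL: ALL

# /etc/hosts.allow -- then punch holes for known-good sources
# (a leading dot matches any host under that domain)
sshd: .dsl.my-local-isp.example.com
proftpd: 203.0.113.0/255.255.255.0
```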

Mar 1 03:44:54 s19410066 sshd[5235]: Failed password for invalid user aion from port 55082 ssh2
Mar 1 03:44:54 s19410066 sshd[5237]: Received disconnect from 11: Bye Bye
Mar 1 03:44:57 s19410066 sshd[5348]: Invalid user odoo from
Mar 1 03:44:57 s19410066 sshd[5349]: input_userauth_request: invalid user odoo
Mar 1 03:44:57 s19410066 sshd[5348]: pam_unix(sshd:auth): check pass; user unknown
Mar 1 03:44:57 s19410066 sshd[5348]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Mar 1 03:44:59 s19410066 sshd[5348]: Failed password for invalid user odoo from port 34944 ssh2
Mar 1 03:44:59 s19410066 sshd[5349]: Received disconnect from 11: Bye Bye
Mar 1 03:45:01 s19410066 sshd[5350]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost= user=root
Mar 1 03:45:02 s19410066 proftpd: pam_unix(proftpd:session): session opened for user willowsweather by (uid=0)
Mar 1 03:45:04 s19410066 sshd[5350]: Failed password for root from port 58772 ssh2
Mar 1 03:45:04 s19410066 sshd[5352]: Received disconnect from 11: Bye Bye
Mar 1 03:45:17 s19410066 sshd[5574]: Invalid user jose from
Mar 1 03:45:17 s19410066 sshd[5575]: input_userauth_request: invalid user jose
Mar 1 03:45:17 s19410066 sshd[5574]: pam_unix(sshd:auth): check pass; user unknown
Mar 1 03:45:17 s19410066 sshd[5574]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Mar 1 03:45:19 s19410066 sshd[5574]: Failed password for invalid user jose from port 58284 ssh2
Mar 1 03:45:19 s19410066 sshd[5575]: Received disconnect from 11: Bye Bye
Mar 1 03:45:20 s19410066 sshd[5576]: Invalid user admin from
Mar 1 03:45:20 s19410066 sshd[5577]: input_userauth_request: invalid user admin
Mar 1 03:45:20 s19410066 sshd[5576]: pam_unix(sshd:auth): check pass; user unknown
Mar 1 03:45:20 s19410066 sshd[5576]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Mar 1 03:45:22 s19410066 sshd[5576]: Failed password for invalid user admin from port 59480 ssh2
Mar 1 03:45:22 s19410066 sshd[5577]: Received disconnect from 11: Bye Bye
Mar 1 03:45:44 s19410066 sshd[5584]: reverse mapping checking getaddrinfo for [] failed – POSSIBLE BREAK-IN ATTEMPT!
Mar 1 03:45:44 s19410066 sshd[5584]: Invalid user time from
Mar 1 03:45:44 s19410066 sshd[5585]: input_userauth_request: invalid user time
Mar 1 03:45:44 s19410066 sshd[5584]: pam_unix(sshd:auth): check pass; user unknown
Mar 1 03:45:44 s19410066 sshd[5584]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=
Mar 1 03:45:46 s19410066 proftpd: pam_unix(proftpd:session): session opened for user artol by (uid=0)
Mar 1 03:45:47 s19410066 sshd[5584]: Failed password for invalid user time from port 55849 ssh2
Mar 1 03:45:47 s19410066 sshd[5585]: Received disconnect from 11: Bye Bye
Mar 1 03:45:58 s19410066 sshd[5589]: Invalid user demo from
Mar 1 03:45:58 s19410066 sshd[5590]: input_userauth_request: invalid user demo

Life n death

Happy New Year, I think!

Odds Of Death In The United States By Selected Cause Of Injury, 2017 (1)

I copied this data from the Social Security Administration site for use in the Nevada County Altar Show– a commemorative exhibition for over 100 ‘artists’ to present what they thought was important.
I thought this was important, as it commemorated a life as yet not over but Sure to Happen: But how?
Which way would you choose, if you could?
“Natural Disasters”? “All Other Causes”?
Just Saying, as they say right here in California!

Cause of death / Number of deaths, 2017 / One-year odds / Lifetime odds
(The numeric columns did not survive the copy; the causes ranked were:)

Accidental poisoning by and exposure to noxious substances
     Drug poisoning
     Opioids (including both legal and illegal)
All motor vehicle accidents
     Car occupants
     Motorcycle riders
Assault by firearm
Exposure to smoke, fire and flames
Fall on and from stairs and steps
Drowning and submersion while in or falling into swimming pool
Fall on and from ladder or scaffolding
Air and space transport accidents
Firearms discharge (accidental)
Cataclysmic storm (3)
Earthquake and other earth movements
Bitten or struck by dog

(1) Based on fatalities and life expectancy in 2017. Ranked by deaths in 2017.
(2) Includes all types of medications including narcotics and hallucinogens, alcohol and gases.
(3) Includes hurricanes, tornadoes, blizzards, dust storms and other cataclysmic storms.

Source: National Center for Health Statistics; National Safety Council.

Facebook Ransom

[Draft: Money Matters, Aug 01 2019]

Locked Out Of Facebook? Can I please reach a Human about this?
No. Here is Why, based on the ever-valuable & interesting Statistics.
Here, I am being conservative:
As of June 2019, there are 2.41 billion ‘active users’ (Define that though : ). 83 million of these are thought to be fake (but how?),
which, though only around 3%, has a disproportionate effect.
That figure also covers Instagram, WhatsApp and Messenger, which Facebook operates.

I am using the USA definition of Billion (1,000 million, not the UK million million) AND This Calculator to help me.
1.6 billion of these users log on, on average, once per day.
I’ll use the larger Active User figure, just cos I can.

Facebook has about 40,000 employees worldwide, which means, in its crudest form, one employee per roughly 60,000 ‘active accounts’, each of which logs in an average of 20 minutes a day.
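The back-of-the-envelope division, for the record (plain shell arithmetic, using the figures above):

```shell
# 2.41 billion active users spread over ~40,000 employees
echo $(( 2410000000 / 40000 ))   # active accounts per employee
# gives: 60250
```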

Facebook is “Free” for you and me. If Facebook is not a commodity you pay for then, as they say, you ARE the commodity. You will be engaged and monetized with ads, etc., but no one can afford, on the slim margins spread over a vast array of users, to talk to you personally, even at third-world rates of perhaps a dollar an hour for Facebook Tech Support.

So, much of the Security around fake accounts, hack attempts etc. is managed by AI (Artificial Intelligence) algorithms that are constantly being tweaked to look for unusual behaviors or reports.

Machine Learning (Big at Google!) is a different beast and ‘learns’ from past experience of hacks… and its own simulations.
Such a strategy produced a program that beat the world champion at the ancient game of Go– a program written by programmers who could not themselves play Go anywhere near that well!

What do Humans do that machines have trouble doing?
If you have been Locked Out of your account, you will find out.
You are likely to persist through a number of means:
“Use your registered cell phone to get a code and verify it’s you.”
Well– that avenue has been cracked by script hackers with bogus texting accounts online.
So the Text never gets delivered. So you try a few more times.
Then FB wants you to upload a driver’s license, etc., all of which can of course be done by machines and faked… I could go on…
EVENTUALLY the algorithm may pass your dilemma on, in a link, to an associate in the Philippines, who may review the case according to Facebook’s strict guidelines.

But, how did you get locked out in the first place?

  1. By an algorithm, automatically: it thought something did not look right
  2. Automatically, for the hell of it, kind of like an IRS audit without any particular triggers (1%!)
  3. Complaints, legitimate or otherwise– FB Does Not Care.
  4. Bogus complaints by cyber saboteurs.
    (Amazon experiences this!)
  5. Your own failed login attempts, etc. My Advice? Persist and Move On!
  6. Consider the huge fines levied against FB, and how they make money and why:
    – their first obligation is to their shareholders
    – their revenue is threefold:
    Targeted ads
    Aggregated data
    Personal Data. It’s this last one that got their wrist smacked with a $1B+ fine.
    Shareholders tried to get rid of Mark Zuckerberg, but as he is a majority shareholder– they cannot.

Google Edible Websites

Basics of making your site Found By Google.
This is technically called “SEO” (Search Engine Optimization), about which tomes have been written.
But without these basics below, none of that will help you.

You are wasting time and money buying Google AdWords etc.
if you do not first make your website attractive to Google’s organic (free) search algorithms, and by inference, most other Search sites.
You would not stick a costly billboard in a ditch or swamp-
so do the homework, and do it with your own site.
We assume you have some sort of website UP already.
So Let’s Check It:

As of May 2019: (things change rapidly!)

1) You cannot afford NOT to have an SSL/Secure site certificate installed. You ONLY need a free Domain Certificate issued by the likes of   regardless of what you are doing at the site, even if you are not taking payments or using any forms. CHECK YOUR SITE using the SSL link:
httpS://yoursite… *AND* http://yoursite…
The site must work whichever you use.
Your Internet provider will help.
If they cannot or will not help, then they have no business selling pages or hosting. It should NOT cost much, or even anything, to do.
More from Google Here

2) Check Redirections– if the non-secure site http://yoursite… is used, it should REDIRECT (301) any page visited to the SECURE version.
Here is Google’s take on that. Try My Site:
(Opens a new tab- close it after the test)
Note it starts HTTP://… But redirects to HTTPS://
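On an Apache host, the classic way to force that 301 is a small .htaccess fragment like this (a sketch, assuming mod_rewrite is enabled; many hosting control panels do the same thing with a checkbox):

```
# .htaccess: permanently redirect every plain-http request to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```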

3) Ideally, the site would treat usage the same whether or not “www.” is in front of the name, by redirecting as above, normally to the SHORTER name, but either way will do. Mine goes to “www…” for historical reasons : )

4) The site must have a robots.txt and a sitemap.xml.
Ideally these files have some clever entries, but first and foremost they
must EXIST.
They are publicly readable on any site that uses them, which will be most of them.
You can find yours by taking your base site URL, removing all page paths, and adding /robots.txt. IE, for my site, it’s:
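A minimal but perfectly valid robots.txt looks like this (the domain is a placeholder): an empty Disallow means “crawl everything”, and the Sitemap line points crawlers at the sitemap file:

```
# robots.txt -- allow all crawlers, point them at the sitemap
User-agent: *
Disallow:
Sitemap: https://www.example.com/sitemap.xml
```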

5) The site must use at least one “Sitemap”, which is a friendly Directory Page to help Google know what’s new, important, relevant or changed, rather than having to ‘crawl’ your site as in the old days.
The default name is sitemap.xml; it contains considerable technical blurb, and may look odd if you try to view it. But it should EXIST.
Try yours by adding /sitemap.xml to the end of your domain name.
Mine is here, for example:

6) If you are using the likes of WordPress, Joomla!, Drupal, etc., much, but not all, of this trivia is taken care of FOR you.
But Check Check Check it anyway, and assume nothing.
Plug-ins and add-ons like Yoast!
take care of much, but not all, of this.
Sites like WIX and SQUARESPACE tend to do this all for you anyway with no action required, but CHECK IT ALL ANYWAY.

7) Buying Google AdWords is IN ADDITION to all the above and is a bonus way to gain hits… But BEFORE you do this, Google provides many free tools with names like Google Analytics.
You should learn how Google sees your site already.
It is NOT a cat-and-mouse game as it used to be–
Google is quite helpful about how to best interest their search algorithms, ergo Google users.
You will, not surprisingly, need a free Google account to do this, after which you can sign up for the free Webmaster Tools.
From there on, follow Google’s instructions on how to verify you own the site you are offering to them, and then add your verified site to the Google “Search Console”.

8) *TIME*. A new website will need to be around awhile before it has some provenance and interest for Google. It’s not easy to force this issue, nor should it be.

9) Links In: Page & Brin’s first patent on Google was ‘PageRank’.
Find ways to get people or pages to link *IN* to your site as an ‘authority’ referenced by other well-ranked sites.
If they are well-respected, it will rub off on you. And Don’t Cheat or pay shysters with Promises. Scamming De-Ranks You!
From here on out it is a dizzying array of tweaks and suggestions about INCREASING your ability to be found-
I ALWAYS try using Google in Incognito Mode in Chrome (so Cookies will not be used and I shall see Searches as a fresh visitor would), and even use a free VPN proxy to appear to come from ‘elsewhere’ on the Internet (Free 500MB per month), as some searches involve regional/locale factors, and you can see things as ‘others’ elsewhere see them this way.

Learn from the Google Webmaster Tools what words people are using to find you. Consider that most searches are fairly generic and will NOT be looking for your special “Celo Polka Cola” website
(it does exist and it’s not mine!),
so it should be findable as “unusual cola”, “rare soda brand”, “novelty pop”, etc– you get the idea..!

That’s Enough For Now.
Suggestions Welcomed!

Cormorants at Hoover Dam

While looking down the 700-foot drop of the Hoover Dam’s face from the top, and squinting at the swirly water coming out of the turbines far below, just where it continues downstream, I noticed some rather inconspicuous black birds with narrow wings.

They were swirling around in the updrafts and air eddies from the strong wind rushing upstream over the river in the gorge and lifting up the concrete face of that there dam.

They do not seem to be very good flyers and flapped like crazy, but the updraft of air took care of everything, and their tatty-looking bodies, feathers all ruffled, were whisked in ever-rising circles dramatically higher and higher towards me at the top.

The movements were too erratic, and they were too tiny, to capture on film, so I just enjoyed trying to figure out what they were as each of about four whirled into view…

About the point where they zoomed over the crest of the dam & over my head, I marveled to see that they were actually cormorants:

normally deep-diving and fishing skinny black waterbirds, weak flyers not known to make good use of their air mode of transport, and definitely not soaring birds such as condors, gulls and eagles.

Several of them then shot behind the dam then downward into the calm waters Of Lake Mead below where they continued peacefully fishing and diving there.

Ironically, they don’t have much oil in their feathers, so they don’t even float very well, as ducks might.

This is what makes them great divers.

I thought “Jonathan Livingston cormorant” for a moment

And then rephrased that in my head: “against all odds”.

How these birds learn to do this is anybody’s guess, but my feeling is that, flying anywhere over that water, they inevitably get blown up the dam face by the powerful up-currents…

Yet I saw no other birds doing this.

So they might as well enjoy and the outcome is not bad.

I guess they fly out away from the dam face to get down again, but I didn’t see this.

I doubt theirs is a one-way journey.

The dam itself, being artificial, is very smooth, and there’s very little turbulence, unlike on a rugged natural cliff face, where they would be spun to death by the air rotors…

Natural selection at work?

Be interesting to stick around to see!

They have had since the 1930s to get selected 🙂

[April 2019]