Saturday, November 29, 2014

Useful tips for remote connectivity/networking with Linux Mint

In previous posts I've been talking about VNC, X11, SSH, etc... Here are a few leftover things that are the icing on the cake:
  1. Wake-on-LAN (WOL): basically, your server doesn't have to be up all the time. You can wake it up remotely, from suspend or hibernate, even from your cell phone. It worked flawlessly for me.
  2. How to set up Snort, a lightweight network intrusion detection system: http://www.aboutdebian.com/snort.htm, http://www.informit.com/articles/article.aspx?p=21777&seqNum=3
  3. Fail2ban scans log files for various services (SSH, FTP, SMTP, Apache, etc.) and bans IPs that rack up too many password failures.
  4. To complement the above, you can also use Tripwire, which monitors/logs changes to files and directories.
  5. ifconfig to set up the network interface.
  6. To create a network share with Samba 
  7. How to manipulate your ethernet card
  8. List of known ports
  9. Very nice brief tutorial on ports, what they are, etc...
  10. To find the name of a server behind an IP address (reverse lookup):  nslookup 208.77.188.166. You can also use the web https://ipdb.at   
I have not worked with some of these, so as I go through them I may write a dedicated tutorial on each.

Cheers!

Friday, November 28, 2014

Setting up VNC in Linux Mint, tunneling over SSH and connecting remotely from Windows

As we saw in my previous post, the result of working through X11 is OK, but not very pretty (although I have a feeling there is more to it, so if you know, please comment :) ). If you want to see your desktop from another machine, the next level up is Vino (VNC), but many folks warn about security risks with this, as VNC per se does not have encryption. It also requires you to keep a session open/logged in on the Linux machine (it is basically sharing the desktop). That is not the case with xterm over SSH, which is just one more user connecting to the server.

Anyhow, here is the way (I would advise reading the whole thing first so that you get the overall picture before starting with the details):
  1. To remotely access the box from Windows. Mint comes with VNC server Vino, for full graphical access. Also, check this video.
  2. To launch the server in the terminal: /usr/lib/vino/vino-server
  3. To change preferences, use vino-preferences in the terminal or type remote in the start menu (Desktop Sharing will show up).
  4. Nevertheless, that does not expose all the preferences! I was having trouble connecting with TightVNC or RealVNC because I didn't have the right encryption settings. Type gsettings set org.gnome.Vino require-encryption false to disable encryption.
  5. To check that the server is running and listening: netstat -ln -A inet. To understand netstat you can check Wikipedia. Examples for Unix can be found here and here. For the DOS version, which is still cool, click here.
  6. A funny one... download gvncviewer (a VNC client for Linux) and run it against your own machine's address. It will go into a crazy loop displaying itself inside itself... You can stop it by closing the top window.
Anyhow, this is all well and easy to test within our own network, without having to go through a firewall or connect remotely to a server on a network with a dynamic IP address.

Nevertheless, as mentioned, VNC does not have encryption. Also, remotely connecting to a machine at home on DSL, where the service provider changes your IP address every now and then, is a challenge of its own. So, we tackle those two problems here.

For the first, to increase security, you will want to tunnel VNC through SSH. Here is the way I did it, but there are also this link, this and this one (which I found but didn't use). Trying to understand this better, I found this nice link. The way I understand it:
  1. The trick lies in a very nice capability of SSH. Basically, we can ask the SSH client to listen on a particular port on our local machine and forward whatever arrives down the secure connection to a port on a machine at the other end. I.e., steps 2 and 3 here tell the SSH client to:
    1. Listen on a given port (the VNC port, 5900) on the local machine. When we launch the VNC client locally, we will give it the address 127.0.0.1 (explained later), and by default VNC tries to connect to port 5900, so that is where the SSH client listens to "catch" those attempts.
    2. If anything arrives, forward it to the SSH server on the other side and tell it to pass it on to the right machine and the right port. In my case, the right machine is the same one running SSH, and my VNC server is running on the default port (5900).
  2. So, again, in those steps the "source port" is the port on the local machine that PuTTY listens on, and the "destination" is the server address and the port where you want that traffic forwarded.
  3. Now, instead of having the VNC client (RealVNC or whatever) connect to the remote address ("destination"), we tell it to connect to our own machine's VNC server (there is actually no such thing; it's just that PuTTY is listening there). This is done by entering the address 127.0.0.1, the standard loopback address: packets are routed back into (or out of) the machine at the network-adapter level without ever touching the external network (LAN). See this. So, the VNC client doesn't even notice the trick.
  4. Notice that on the remote network, where the server sits behind a firewall, we don't need to open the VNC port on that firewall, just the SSH port.
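I used PuTTY, but for reference, from a Linux client the same tunnel can be built with plain OpenSSH; this is just a sketch, and "user" and "myserver.example.com" are placeholders for your own account and server:

```shell
# The OpenSSH equivalent of the PuTTY tunnel above (placeholder names):
#
#   ssh -N -L 5900:localhost:5900 user@myserver.example.com
#
# -L 5900:localhost:5900  PuTTY's "source port" is 5900, and the "destination"
#                         is localhost:5900 as seen from the SSH server
# -N                      open no remote shell, tunnel only
VNC_TARGET="127.0.0.1:5900"
echo "Now point the VNC client at $VNC_TARGET"
```

Once the tunnel is up, the VNC client talks to 127.0.0.1:5900 and never notices the detour.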
For the 2nd problem, doing VNC to a machine on a network with changing IPs, one can read this. Basically, you have to register your domain name with some server on the Internet and then have your network somehow report its IP back periodically in case it has changed. So, two things are needed here.
  1. First, the dynamic DNS (DDNS) server. A good list of free ones can be found here. Basically, you register your domain name (whatever you want it to be) against your network's IP. The domain name may be a full name (that you may have got somewhere) or a partial name, where you pick a portion of the full domain name and the rest is assigned by your DDNS company. I am using duckdns.org, and my server's domain name will be something like myname.duckdns.org.
  2. Second, your network has to report its IP back to that server periodically, so the server knows when it has changed. This can be done by the network's router (D-Link is an example, and they even provide the DDNS server for free) or, if your router can't do it, by a computer within the network, using some kind of app that the DDNS company gives you. In my case, duckdns.org gives you step-by-step installation instructions for different platforms. Really easy. Using Linux, my server uses crontab to periodically send the IP to the DDNS server (duckdns.org).
Note: my router seems to be a rebranded D-Link, but I'm not sure its DDNS feature works, so I am using the crontab approach.
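As a sketch of what that crontab job ends up doing (the domain and token below are placeholders; Duck DNS generates the real update script for you on its install page):

```shell
# Build the Duck DNS update URL (DOMAIN and TOKEN are placeholders, not real)
DOMAIN="myname"
TOKEN="your-token-here"
UPDATE_URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&ip="
echo "$UPDATE_URL"
# A crontab entry (crontab -e) calling it every 5 minutes could look like:
# */5 * * * * curl -fsS "https://www.duckdns.org/update?domains=myname&token=your-token-here&ip=" >/dev/null
```

Leaving the ip= parameter empty tells the service to use whatever address the request came from.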

Finally, now that we have all this running, it wouldn't hurt to step up the security and get some other cool stuff going. Check my latest post on this topic...

Cheers!

Getting X11 setup in Mint and connecting with Windows

In my previous post I explained how to connect remotely to your box through SSH. Nevertheless, that was limited to a text-terminal-type interface, i.e., no access to graphical applications. So, now that we have SSH running, we want to get a GUI through that pipe, i.e., a remote graphical interface to our Linux box. I basically followed these steps:
  1. Make sure you have the right software on the client machine (Windows in my case; I use Xming) and on the server side (Xorg). In my case, I had to install Xming; Xorg was already there. I also installed xterm, but I don't think it is required (?)
  2. Installing Xming was really straightforward and didn't require any configuration. When you run it, it stays in the background (you can see it as a process).
  3. Then you can launch PuTTY (see my previous post on SSH). Make sure the X11 forwarding option is enabled (see link above...).
  4. Once Xming is running, you should be able to launch graphical applications...  
Note: if for any reason this does not work, delete all .Xauthority* files from your home directory, reinstall Xorg (sudo apt-get --purge remove xorg and sudo apt-get install xorg) and log in again.

I am not an expert on the X Window System and have not researched the details at all, but roughly: confusingly, Xming is actually the X server (even though it runs on the "client" PC), and the graphical applications on the Linux box are the X clients. They send Xming commands (through the SSH connection) that tell it what to paint on the screen, and Xming sends keyboard/mouse events back so you can interact with the windows. I guess the commands are fairly high level to minimize network bandwidth.

If anybody knows of a simple explanation/link, please comment :)

I believe there are also configuration options that can improve the quality of what you get. Nevertheless, in my case, I just jumped all the way to VNC. That is not ideal, though, as it does require an open session on the server, and bandwidth requirements are also likely higher. So, I may come back and write a further post on this.

Also in VNC one has to be careful with security/lack of encryption. Anyhow, that will be described in my next post...

Cheers!!

Setting up SSH in your Linux box

I start here a series on methods to connect remotely to your Linux machine. To start with, regardless of which method you use, you will have to watch out for any firewall between the client and the server. Linux uses iptables to decide what to do with each packet. Mint ships a firewall tool built on top of it, called ufw (or gufw for the graphical version), that makes the rules easier to set up. To begin, if you are disconnected/isolated from the Internet and want to start playing with the steps below, you may want to simply turn it off. This is just to get things running... of course, turn it back on later. For a guide on gufw, please see this.

So, back to the different methods, let's start with SSH (Secure Shell), which allows for probably the lowest mode of interaction, but it is encrypted. A quick way to describe it is that it looks like telnet... Nevertheless, we will see later that it does allow you to export a graphical terminal from the machine. Basically, SSH takes care of the security of the data, regardless of what that data is.
  1. Note: ssh can be used to set up the server/daemon or to connect somewhere else as a client. Here we are interested in the server, as I want to make the connection from a Windows PC...
  2. See link and scroll down for SSH instructions.
  3. The instructions above are not great, so I'll try to recreate them here quickly... You can check the /etc/init.d directory to see if ssh is there. If not, you have to install it: sudo apt-get install openssh-server. More instructions here
  4. Use sudo service ssh start (or stop, restart, status) to control ssh. Calling the script in init.d directly didn't work for me... One thing to notice: once installed, the server is remembered and will already be running at boot.
  5. Use ps -A | grep ssh to see if the daemon is running.
  6. Use sudo nmap localhost to see if there is a port open for SSH. This is limited to the first 1000 ports; to see beyond that use, for instance, sudo nmap -p 1-50000 localhost. nmap may not list the port as ssh but as whatever it thinks it should be, depending on how deep you let it check (see the options). To identify the service, you'd better use something like sudo nmap -sV -p 22000-22010 localhost (here narrowed down to 22000-22010), and you may get what the port reported (you should see OpenSSH there...). Note: there is a graphical version called Zenmap.
  7. A good link on the use of ssh, for more security (basically, change the default port). The port is usually 22, but you can change it in the config file (sudo subl /etc/ssh/sshd_config). Notice that we use sshd_config and not ssh_config (which is for the client). See here
  8. How to check if ssh is running on a remote machine.
  9. How to use ssh to tunnel your information (send in a way that the middle man can't see/touch it).
  10. How to generate keys so that one doesn't need the password to login.
  11. ssh options 
  12. ssh-keygen is a utility that creates key pairs. ssh-keygen -lf filename shows a key's fingerprint. In our case, we just go to /etc/ssh and type ssh-keygen -lf ./ssh_host_rsa_key.pub, which shows us the fingerprint. When we first log in from Windows with PuTTY, it asks whether the server is trustworthy and shows the key fingerprint, which should match the above.
  13. Finally, download PuTTY on Windows and connect to the Linux machine. It should look like a terminal window... If you try to get graphics (for instance, launching an app with a graphical interface, like typing vino-preferences), you'll get an error back saying "Could not open X display".
  14. Note: as described above, if you have gufw ON, remember to add a rule to let through whatever port you opened for SSH.
  15. If you want several screens attached to the same terminal window, you can install Screen. Very quick to get a handle on. See the tutorial here. Quick reference:
    1. Once you type screen in your ssh session, you are inside Screen (it looks like a normal terminal)
    2. ctrl+a tells Screen that the next character is a command. Typical commands are ? (list of commands), n (next screen) and a number (to jump to that screen; when you do that, the list of screens shows at the bottom)
    3. Type exit to quit
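A little sketch of item 7 (the port change); here I edit a scratch copy so it is safe to run, but on the real box the file is /etc/ssh/sshd_config and you would finish with sudo service ssh restart. Port 2222 is just an example:

```shell
# Make a scratch config that looks like the relevant part of sshd_config
cfg=$(mktemp)
printf '%s\n' '#Port 22' 'PermitRootLogin no' > "$cfg"
# Uncomment the Port line (if needed) and set the new port
sed -i 's/^#\?Port .*/Port 2222/' "$cfg"
new_port=$(grep '^Port' "$cfg")
echo "$new_port"   # prints: Port 2222
rm -f "$cfg"
```

After restarting the daemon you would connect with ssh -p 2222 (or set the port in PuTTY), and adjust the gufw rule accordingly.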
So, there you go! On the next post, we will talk about how to get programs with graphical interfaces going...

Good luck!

Saturday, November 15, 2014

Linux commands

This is a list that just keeps growing as I learn, for my future reference... I am pretty sure there must be nicer, more organized lists out there...
  1. ls to list files.
  2. alias gives an alias to an instruction. For instance, alias ll='ls -al'
  3. cat displays the content of a file without stopping at the end of the page.
  4. more does the same thing but stops at the end of each page. Space goes a full page forward; Enter goes one line forward.
  5. less is similar to more with more functionality (like going back a page by pressing b). Press q to quit.
  6. ps aux | less shows all processes, long info, one screen at a time
  7. top displays top CPU processes
  8. sudo in front of any command to bypass permissions 
  9. nmap localhost to scan open ports. More details here, here and here. For instance, to increase verbosity level add -v.
  10. file -bi yourfile.html tells you the encoding of your file.
  11. ifconfig tells your IP address among other network stuff... 
  12. Backing up local linux host and remote linux host
  13. Creating and managing users
  14. Managing password expiration
  15. man command gives the manual for that command
  16. chown change the owner of a file
  17. sudo poweroff to completely power off the machine
  18. sudo pm-suspend
  19. sudo pm-hibernate but by default, you can't do it.
  20. sudo reboot
  21. sudo apt-get --purge remove <package>
  22. sudo apt-get install <package>
  23. rm -rf <directory> removes recursively a directory and all the files within. f indicates not to prompt for confirmation. 
  24. which - shows the full path of (shell) commands. 
  25. swapon -s to check the size of the swap space. Good set of commands on swap. What swap is, or this one.
  26. at HH:MM executes something at a given time. It prompts for the commands to execute; you can add as many as you want and finish with ctrl+D. atq shows all pending jobs.
  27. free tells you the memory use
  28. sudo fdisk -l lists the partitions. Or use lsblk or blkid. Basic partition intro here, and more on partitions here. It doesn't hurt to refresh what cylinders, sectors, etc... are. 
  29. inxi will give a lot of system information (hardware, drivers...). -G tells you about the graphics. -b is basic info. 
  30. tail - output the last part of files  
  31. sudo iotop -a Shows read/write process activity
  32. lshw -short lists all the hardware...
  33. lsblk lists the drives (block type devices attached)
  34. dd if=/dev/sda of=~/hdadisk.img to backup the disk (need su)
  35. cp -r dir1/ ~/Pictures/ copy dir1 contents into Pictures directory in the home directory
  36. rm -rf mint Delete mint directory and all stuff included on it
  37. alias hdd='cd /media/edusson/SP\ PHD\ U3' Type hdd to change to the /media/edusson/SP PHD U3 directory. Notice the use of \ to escape the spaces.
  38. sudo blkid (for all partitions) or sudo blkid /dev/sdb1 to check if a given partition is encrypted.
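As a runnable example of item 2 above (note that inside a script, bash needs expand_aliases switched on; interactive shells have it on already):

```shell
# Define and use an alias; non-interactive bash keeps alias expansion off
# by default, hence the shopt line
shopt -s expand_aliases
alias ll='ls -al'
ll /tmp > /tmp/ll_demo.txt     # use the alias like any command
head -n 1 /tmp/ll_demo.txt     # first line of ls -al is the "total" line
```

Aliases defined at the prompt vanish when the shell exits; put them in ~/.bashrc to keep them.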
Editors:
  1. vi editor. Basic commands. Press : to get to the command. q! will exit without saving.
  2. nano quick editor
  3. subl, a fancier editor (Sublime Text). Go to the Synaptic package manager, type sublime in the search, tick the box to install and apply changes... Or go to their website, download the Linux installer, untar, etc., and install...
Bash (Bourne Again shell):
  1. Use & at the end of a command to launch it independently of the terminal.
  2. history shows the history of commands. tricks1 and tricks2
  3. Double click on something to highlight it. Right click to paste it on the prompt.
  4. Break a Unix command between lines with \
  5. Press Tab to autocomplete what is left
System variables:
  1. See a variable: echo $variable_name. For instance, echo $SHLVL
  2. Set a variable with export. For instance, export PS1="\u@\h \w> "
  3. SHLVL: level of bash you are in. See here.
  4. PS1, PS2, PS3 and PS4: prompt statement. Lots of options.
  5. PROMPT_COMMAND: Bash executes the content of PROMPT_COMMAND just before displaying the PS1 prompt. For instance, export PROMPT_COMMAND='date +%H:%M:%S'
  6. HOME
  7. PATH
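Putting items 1 and 2 together, a quick sketch (\u, \h and \w are the user, host and working-directory escapes):

```shell
# Set a user@host cwd> prompt and show the raw value
export PS1="\u@\h \w> "
echo "$PS1"
# PROMPT_COMMAND runs just before each prompt is displayed:
export PROMPT_COMMAND='date +%H:%M:%S'
```

In an interactive shell the next prompt would show the time followed by something like user@box ~> .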
 Boot order (in Mint):
  1. On GRUB (bootloader)
  2. /etc/bash.bashrc
  3. /etc/bash_completion (called by the previous one if it exists)
  4. /usr/share/bash-completion/bash_completion. Called by the previous one, this file is huge.
  5. ~/.profile executed by the command interpreter for login shells. This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login exists.
At exit:
  1. ~/.bash_logout
Directories/folders:
  1. home or also indicated as ~. The base directory for the specific user
  2. proc: running processes, drivers, hardware info...
  3. /etc/default: default settings for the programs
  4. /var/log: system log files. To see them you can just use the GUI log viewer
  5. /proc directory. Very special virtual directory

Notes:
  1. Suspend and hibernate with Linux, a whole other beast. Start here. Check also my two posts on the topic...
  2. Good explanation to .so, .a, .dll etc...
How to:
  1. Plug in a USB drive: plug it in (duh!); if it's not visible in the file manager, check lsusb. If visible, go to Disks (type that in the start menu search) and Mount it (click play in my version).
  2. Backup: use Timeshift
  3. If somehow the screen freezes (you may still see the mouse moving but nothing reacts), or if you ever got locked and were unable to log back in, switch to console with CTRL+ALT+F1, log in, and type “killall cinnamon-screensaver” (or “killall mate-screensaver” in MATE). Use CTRL+ALT+F7 or CTRL+ALT+F8 to get back to your session.

References:
  1. http://www.linfo.org/
  2. 50 Linux admin commands

Ten things to do when you have just got Mint:
https://www.youtube.com/watch?v=h_rIFm5Ygw0

Update the System:
sudo apt-get update
sudo apt-get dist-upgrade

Yakuake dropdown terminal:
sudo apt-get install yakuake

Get old wallpaper:
sudo apt-get install mint-backgrounds-*

Install Flash Player:
sudo apt-get install flashplugin-installer

Install Restricted Extras:
sudo apt-get install ubuntu-restricted-extras

Codecs & Enable DVD Playback:
sudo apt-get install gstreamer0.10-plugins-ugly libxine1-ffmpeg gxine mencoder libdvdread4 totem-mozilla icedax tagtool easytag id3tool lame nautilus-script-audio-convert libmad0 mpg321 libavcodec-extra

Enable DVD Playback:
sudo /usr/share/doc/libdvdread4/install-css.sh

Install Dropbox:
sudo apt-get install dropbox python-gpgme

Skype:
sudo apt-get install skype

wget -O skype.deb http://download.skype.com/linux/skype...
sudo dpkg -i skype.deb
sudo apt-get -f install;rm skype.deb

(64bit) fix skin issue with this command:
sudo apt-get install gtk2-engines-murrine:i386 gtk2-engines-pixbuf:i386 sni-qt:i386

Install rar and other archiving utilities:
sudo apt-get install p7zip-rar p7zip-full unace unrar zip unzip sharutils rar uudeview mpack arj cabextract file-roller

Install a clipboard manager:
sudo apt-get install clipit

Hardinfo - System information tool:
sudo apt-get install hardinfo

Install Firewall
sudo apt install gufw

Cleanup:
sudo apt-get autoremove

ODE Installation in Linux Mint

So, I got my new Linux (Mint) box up and went to install ODE (Open Dynamics Engine, a physics library). Here are the steps, for my future reference (ODE manual here). Boy, this certainly wasn't straightforward for me...:
  1. Download code from SourceForge
  2. Uncompress within the Eclipse workspace. Yeah, not sure if best place but...
  3. Make sure you have a compiler like g++ installed. You can check with which g++. If not: sudo apt-get install g++
  4. Make sure you have GLUT installed (these are the OpenGL libraries needed for the drawstuff part, and it took me a while till I came across this good link). Just execute
    sudo apt-get install freeglut3 freeglut3-dbg freeglut3-dev (.h/includes get installed in /usr/include/GL, .so/.a libraries in /usr/lib/x86_64-linux-gnu/). More on OpenGL directories here.
  5. Follow the install-from-tarball instructions... Basically, run ./configure --enable-double-precision to generate the makefiles. At the end of the run there should be a line that says "configuration:" and a list of what was detected; you should see "Use drawstuff: X11" there too.
  6. make builds the library using those makefiles.
  7. sudo make install. After this, one can see the ODE libraries libode.a and libode.la in /usr/local/lib, and a bunch of .h files in /usr/local/include/ode.
  8. Nevertheless, drawstuff is MIA. Again following the instructions here, we need to copy those files:
    1. sudo mkdir -p /usr/local/include/drawstuff
    2. sudo cp include/drawstuff/version.h /usr/local/include/drawstuff
    3. sudo cp include/drawstuff/drawstuff.h /usr/local/include/drawstuff
    4. sudo cp drawstuff/src/libdrawstuff.la /usr/local/lib
    5. sudo cp drawstuff/src/.libs/libdrawstuff.a /usr/local/lib
  9. Run the eclipse. If for the first time use this command: ./eclipse -initialize
  10. Create a c++ project
  11. Copy the demo file .cpp from ode/demo directory into the workspace project directory and do refresh in Eclipse (I used the demo_buggy demo).
  12. Add the following lines at the beginning of the .cpp file (they may not be there because the example may not have been targeting Linux):
    • #define dDOUBLE
    • #include <X11/Xlib.h>
    • #include <GL/glut.h>
    • #include <ode/ode.h>
    • #include <drawstuff/drawstuff.h>
    • #include "texturepath.h"
  13. Add the following paths (in Eclipse, right click on the project --> Properties --> C/C++ General --> Paths and Symbols):
    1. In the Includes tab/GNU C++: add /usr/local/include (the ode and drawstuff includes are in there). FYI, /usr/include has most of the others, including X11 and GLUT (OpenGL).
    2. On the "Properties" page, click "C/C++ Build→Settings". Under the "Tool Settings" tab, click "GCC C++ Linker→Libraries" (Fig. 10)
    3. On the "Libraries (-l)" pane, click the add button (Fig. 11)
    4. On the popup dialog box type "GL" and click "OK"
    5. Repeat the above two steps to add "GLU" and "glut" libraries
    6. Also add X11. Notice that all these libraries are actually named libxxx, for instance libX11.a, not X11.a; but we nevertheless list them without the "lib" prefix. See here.
    7. And just when I thought I was done, I still needed to add this one: pthread. I was getting this error: "Description Resource Path Location Type /usr/local/lib/libode.a(atomic.o): undefined reference to symbol 'pthread_mutexattr_init@@GLIBC_2.2.5' demo_buggy C/C++ Problem"
    8. In the Library Paths add:
      1. /usr/local/lib contains libdrawstuff.a and libode.a
      2. /usr/lib/x86_64-linux-gnu/ containing libGL and libGLU
      3. A "funny" thing: I was trying to find where libX11.a lives, and it showed up under the /proc directory (/proc/3921/cwd). /proc contains virtual directories/files created at run time; in this case, 3921 was the bash process. So, somehow, we don't need to add any path for libX11.a
    9. Clicked run and still had one silly message saying that "a program file was not specified in the launch configuration". Fixed with this. And GOOOO!!!!
Seems that we are set to PLAY NOW! :D
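For my own future reference, the Eclipse settings above boil down to a handful of compiler/linker flags; a hypothetical Makefile capturing them (paths as installed in the steps above; the target name matches the demo I used) might look like:

```makefile
# Hypothetical Makefile equivalent of the Eclipse settings above
CXX      = g++
CXXFLAGS = -DdDOUBLE -I/usr/local/include
LDFLAGS  = -L/usr/local/lib
# note: libraries are listed without the "lib" prefix, as discussed
LDLIBS   = -lode -ldrawstuff -lGL -lGLU -lglut -lX11 -lpthread

demo_buggy: demo_buggy.cpp
	$(CXX) $(CXXFLAGS) $(LDFLAGS) -o $@ $< $(LDLIBS)
```

With that, make demo_buggy should build the demo from the command line, no IDE required.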

Other references:
  1. Some good advice to run ODE in Linux
  2. Hiro Aki blog
  3. 3D graphics tutorials:
    1. http://www.arcsynthesis.org/gltut/
    2. http://www.tomdalling.com/blog/category/modern-opengl/
    3. http://www.codeincodeblock.com/2013/07/modern-opengl-32-tutorial-series-with.html 
  4. Don't think this has to do with Mint or ODE, but in Eclipse, after I copy the project (ctrl+C, ctrl+V, rename the cpp) the whole file is marked with red warnings/errors. It is solved by going to properties --> C/C++ General --> Indexer and then mark Enable project specific settings, Enable indexer, Use active build configuration.

Sunday, November 9, 2014

Linux Mint goes into hibernate but boots back fresh (with no hibernate saved info)

So, it does look like it goes into hibernate, but when it comes back after boot, it simply looks like a fresh new boot, nothing left from the previous session.

So, I followed these steps (besides trying the stuff listed at the end, which didn't work).
In my case I could check the log and see the difference between a successful suspend and a failed hibernate. I used: more /var/log/pm-suspend.log

You can see that the part of the log corresponding to the suspend finishes with:
...
Running hook /usr/lib/pm-utils/sleep.d/99video suspend suspend:
kernel.acpi_video_flags = 0
/usr/lib/pm-utils/sleep.d/99video suspend suspend: success.

Running hook /etc/pm/sleep.d/novatel_3g_suspend suspend suspend:
/etc/pm/sleep.d/novatel_3g_suspend suspend suspend: success.

Sat Nov  8 18:32:07 CST 2014: performing suspend


And immediately after, the next lines are:
Sat Nov  8 18:49:03 CST 2014: Awake.
Sat Nov  8 18:49:03 CST 2014: Running hooks for resume

Running hook /etc/pm/sleep.d/novatel_3g_suspend resume suspend:
/etc/pm/sleep.d/novatel_3g_suspend resume suspend: success.

Running hook /usr/lib/pm-utils/sleep.d/99video resume suspend:
/usr/lib/pm-utils/sleep.d/99video resume suspend: success.

Running hook...


I.e., one can see how the machine saves its state and goes into suspend at 18:32, is awakened at 18:49, and starts restoring things in the reverse order it saved them.

Scrolling up in that log, we can see a previous hibernate attempt performed right before this suspend, where it did the same thing and finished on the same peripherals...
Running hook /usr/lib/pm-utils/sleep.d/99video hibernate hibernate:
/usr/lib/pm-utils/sleep.d/99video hibernate hibernate: success.

Running hook /etc/pm/sleep.d/novatel_3g_suspend hibernate hibernate:
/etc/pm/sleep.d/novatel_3g_suspend hibernate hibernate: success.

Sat Nov  8 18:01:55 CST 2014: performing hibernate


... but it never came back: the next lines in the log are the suspend sequence described before. At this moment, after being upset with Linux because hibernate wasn't working, I have to say that at least this log is kick-ass... I guess that's one reason to like Linux :)

So, moving on...

The step-by-step debugging post would blame the shutdown sequence, because the hibernate save didn't finish with something like:
"Sun Jul 24 13:15:14 HST 2011: Finished."

Not sure that is the case, as the successful suspend sequence doesn't really have that line either... Hmmm... So, I am going to move on to the restoring part (his 3rd case).

The basic problem seems to be that GRUB (the boot loader) doesn't know where to look for the swap partition. So, I just followed the steps in the same link. All I needed to do was:

sudo gedit /etc/default/grub

Look for the line that starts with GRUB_CMDLINE_LINUX and add the resume option:

GRUB_CMDLINE_LINUX="resume=UUID=Your-SWAP-UUID other-option=value"

where in my case Your-SWAP-UUID was b71d9df1-1b6a-4320-9046-0ec62a49cea0

To find out what that number is, you can do cat /etc/fstab

"You may or may not have other options. Be careful that you type in the correct path or UUID for the drive here. If you don't, then when you try to resume you will have to boot in safe mode to fix the drive name and re-run the update-grub script."

Then save the file and update grub with the following command:

sudo update-grub

Just one last step (which in my case, by chance, I had already done when following this partial solution, which I have now updated :) ). Note: when I did it, I wasn't 100% sure I had done it properly because I was getting an error...

sudo gedit /etc/initramfs-tools/conf.d/resume

And add the line:

resume=UUID=Your-SWAP-UUID

Again, with your own number. Note: the link actually had that number hardwired (the author obviously just forgot...).

Then run this command to update:

sudo update-initramfs -u  

Now hibernate your computer and cross your fingers!

In my case it did work (otherwise why would I post this!?! :) ), and now I can look back at the log and, sure enough, "Finished" was indeed missing from the failed attempt...

Cheers!

PS.: Other things that didn't work for me but may help you:
  1. This to see what the PC monitors during hibernate
  2. Or this, adding some fancy code... I think this portion is similar to one of the steps in the link

Saturday, November 8, 2014

Linux Mint freezes after suspend

Tried a few things from the Internet (increasing swap, some .sh to put in the pm folder...) but they didn't work. Finally, in my case, it was the NVIDIA nouveau driver. I was using it because it worked better for VNC (with the recommended driver, the one I am using now, VNC was hanging sometimes).

So, back to the one recommended by the Driver Manager:
nvidia-331
version 331.38-0ubuntu7.1
NVIDIA binary driver - version 331.38

Will move into debugging the VNC issue next :(

FYI:
edusson@XX ~ $ inxi -G
Graphics:  Card: NVIDIA G86 [GeForce 8500 GT]
           X.Org: 1.15.1 drivers: nvidia (unloaded: fbdev,vesa,nouveau) Resolution: 1280x720@60.0hz
           GLX Renderer: GeForce 8500 GT/PCIe/SSE2 GLX Version: 3.3.0 NVIDIA 331.38

Cheers!

PS: FYI, my hibernate was not working either. It used to go into hibernate, but when it came back it looked like it had just booted; no old session... It was unrelated to this; I found the solution and posted it here.

Thursday, November 6, 2014

Mint swap partition problem

To the point: I installed Mint, but when I do free I can see that the swap partition has zero blocks free or available. All zeros... Checking sudo swapon -s also shows no swap, and if I do sudo swapon -a I get "/dev/mapper/cryptswap1: stat failed: No such file or directory".

This is what I did. First, make sure you have a partition you can use as swap. Easiest for me was gparted. The partition was there, but it was not linux-swap, so I right-clicked on it and formatted it to linux-swap.

Also, right-clicking on it and checking Information, one could see it didn't have a UUID. Not sure if you need to reboot somewhere around here, but I did.

After the reboot I believe I should have been able to see the UUID with gparted, but I don't remember checking. I used blkid and there it was. I wrote down the UUID. More on seeing UUIDs here.

Now edit fstab: sudo subl fstab (in /etc). Find what the computer thinks is the UUID for the swap partition and enter the real one. Save. Also, edit /etc/crypttab: delete its contents, save and exit. Found those steps here.
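For reference, the swap line you end up with in /etc/fstab should look something like this (the UUID here is made up; copy yours from the blkid output):

```
# swap entry in /etc/fstab (hypothetical UUID -- use the one blkid reported)
UUID=21618415-7989-46aa-8e49-881efa4cd1c8 none swap sw 0 0
```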

Now you should be able to do sudo swapon -a, and after that -s, and see the swap partition. Somehow the computer had the wrong UUID for that partition.

By the way, now you can use cat /proc/sys/vm/swappiness to see how willing the kernel is to move RAM out to the swap disk (a value from 0 to 100; the higher it is, the more eagerly it swaps). The default is 60, but when I do free, I can see the swap all empty. Maybe I am not doing much right now? Check the SwapFAQ to see how to change it. Truth is that in my case, with Firefox, LibreOffice and Gimp open, I still don't seem to need the swap (free shows zero usage).
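If you do want to tweak it, here is a minimal sketch (the value 10 is just an example, not a recommendation):

```shell
# Show the current value (0-100; Mint/Ubuntu default is 60)
cat /proc/sys/vm/swappiness

# Change it for the running session only (needs root):
#   sudo sysctl vm.swappiness=10
# To make it survive reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```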

Hope it helps!

Saturday, October 4, 2014

Time to change my GTI MK5 air filters...

Just to clarify, I am referring to the cabin air filter:
  1. https://www.youtube.com/watch?v=ZJg5BwEPiDI 
  2. https://www.youtube.com/watch?v=KXHL48X7kmg
That is pretty straightforward except for the crawling under the passenger seat. To put the filter back in, insert the filter first and then the cover that snaps on; otherwise it's harder. It probably took me 10 min the first time I did it.

And to the intake air filter:

What you need to know is that the filter is below the engine cover, so you need to get the cover out first. Follow this. Then take the screws out of the cover, open it up, replace the filter, close it (making sure the hinges get back in, which is the difficult part) and undo the previous steps. To snap the cover back in, in my case it just went in by applying pressure, but maybe I was lucky...

This other video is not very good, but FYI https://www.youtube.com/watch?v=damY97JjTQ4

This one took me about 1 hr the first time (I didn't know how to remove the engine cover till I found the video...).

Cheers!


Monday, June 9, 2014

3D effect without glasses

The following shows how to give the phone or tablet user the sensation of looking at an object in 3D by just using the front facing camera. I believe this is a similar technique to what the Amazon phone is using to display 3D (see this link; just guessing, as the phone is not out till June 18th). See more background at the end of the post. In the meantime, let's just jump into the technique.

Basically, with the phone's front facing camera we recognize where the viewer's head is with respect to the screen and present the objects on the screen from the viewer's perspective. With this I create a 3D effect without glasses (no stereo vision, though, just the angle, which is powerful enough).

The code is relatively simple, but unfortunately I do not know how to access some low level stuff, so the work goes into getting things running fast with some hacks... I tried my best but one can still see some lagging... Check out this video. Disclaimer: this is just an experiment. I had only ~20 pictures for the full angle of view, I didn't really adjust the angle of those pics to the angle perceived from the front camera (I eyeballed that...) and it was kind of tough to record with the other phone while moving... a camera attached to my head would have been nice for this, but anyhow, it gives the idea... :).

The top level structure is:

Load in memory all the potential pictures, taken a priori from the potential viewer's perspectives, so that they can be presented in real time as fast as possible (limited by my knowledge :P)

Remember to place your pictures in the root of the SDCard + "/DCIM/3D" or modify that part of the code.

Notice that we save the encoded pictures in memory. This is a trade-off between storing the full raw data (see my first attempt on this topic here), which would be faster as it wouldn't need real time decoding but would require much more memory; and not storing anything, which saves all the memory but is much slower (read from flash + decode).

Capture the camera image

Capturing the image is something pretty trivial in Android. Nevertheless, in our case we want to capture an image but present something completely unrelated to it. Somehow, Android doesn't seem to support that in a well-documented way. I have a post on that here.

I took the same approach as I did here. The general real time image capture framework is done with OpenCV. From OpenCV tutorial: "Implementation of CvCameraViewListener interface allows you to add processing steps after frame grabbing from camera and before its rendering on screen. The most important function is onCameraFrame. It is callback function and it is called on retrieving frame from camera."

Search for the face

This phase, together with displaying the image, is what limits the rendering speed. To speed it up, ideally I wanted to use the "embedded" method that comes with the phone, i.e., the one that shows a square around the faces when you use the camera app that comes with my phone (an HTC One). It seems to be fast and reliable. Unfortunately I do not know how to access it.

The next method down (and the one I used) is the one that comes with the Android SDK (see code below and more details here).

The last method in our tool set is the OpenCV approach. See details here.

Compute the viewer's angle with respect to the display. This is pretty straightforward, so just check the code... Ideally you should adjust this to match the angle of the pictures you have taken, but I didn't really make the effort.
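The face-position to picture-index mapping buried in onCameraFrame can be isolated into a small standalone sketch. The class and method names here, the 1280 px screen width, the 20 pictures and the k=0.1 margin are all hypothetical values for illustration, not part of the app:

```java
// Standalone sketch of the face-position -> picture-index mapping from onCameraFrame.
// All the concrete numbers (1280 px width, 20 pictures, k = 0.1) are made up.
public class FaceIndexMapping {

    // faceX: horizontal midpoint of the face in preview pixels (already scaled
    // back up by 4x, as in the app). k is the dead margin on each side of the screen.
    public static int faceToIndex(double faceX, double screenW, int numberOfItems, double k) {
        int n = numberOfItems - 1;
        double a = n / ((1 - 2 * k) * screenW); // slope: indices per pixel
        double b = -n * k / (1 - 2 * k);        // offset so index 0 starts at k*screenW
        int index = (int) Math.floor(a * faceX + b);
        index = n - index;                      // mirror: front camera is flipped vs. the viewer
        if (index < 0) index = 0;               // clamp to the available pictures
        if (index > n) index = n;
        return index;
    }

    public static void main(String[] args) {
        // Face at the left margin (0.1 * 1280 = 128) maps to the last picture (mirrored)
        System.out.println(faceToIndex(128, 1280, 20, 0.1));  // 19
        System.out.println(faceToIndex(640, 1280, 20, 0.1));  // 10
        System.out.println(faceToIndex(1152, 1280, 20, 0.1)); // 0
    }
}
```

Moving the face from one margin to the other sweeps the whole picture set; anything past the margins just clamps to the first or last picture.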

Present that image. Based on the angle, pick the right picture from memory, decode it and present it. As explained in the first phase, this is not trivial to do with minimum lag.

Without more delay, let's go into the code:

_3DActivity.java
 /*  
  * Working demo of face detection (remember to put the camera/phone in horizontal)  
  * using OpenCV as framework, with Android Face recognition.  
  * AS GOOD AS IT GETS. Still not that smooth. Probably will do better when we  
  * present a graph with OpenGL.  
  */  
 package com.cell0907.TDpic;  
 import java.io.File;  
 import java.util.Arrays;  
 import org.opencv.android.BaseLoaderCallback;  
 import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;  
 import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;  
 import org.opencv.android.LoaderCallbackInterface;  
 import org.opencv.android.OpenCVLoader;  
 import org.opencv.android.Utils;  
 import org.opencv.core.Core;  
 import org.opencv.core.CvException;  
 import org.opencv.core.CvType;  
 import org.opencv.core.Mat;  
 import org.opencv.core.MatOfByte;  
 import org.opencv.core.MatOfInt;  
 import org.opencv.core.Scalar;  
 import org.opencv.core.Size;  
 import org.opencv.highgui.Highgui;  
 import org.opencv.imgproc.Imgproc;  
 import org.opencv.core.Point;  
 import android.app.Activity;  
 import android.graphics.Bitmap;  
 import android.graphics.BitmapFactory;  
 import android.graphics.PointF;  
 import android.media.FaceDetector;  
 import android.media.FaceDetector.Face;  
 import android.os.Bundle;  
 import android.os.Environment;  
 import android.util.Log;  
 import android.view.Menu;  
 import android.view.MenuItem;  
 import android.view.SurfaceView;  
 import android.view.WindowManager;  
 public class _3DActivity extends Activity implements CvCameraViewListener2 {  
   private static final int         VIEW_MODE_CAMERA  = 0;  
   private static final int         VIEW_MODE_GREY   = 1;  
   private static final int         VIEW_MODE_FACES  = 2;  
   private static final int         VIEW_MODE_3D    = 3;  
   private MenuItem             mItemPreviewRGBA;  
   private MenuItem             mItemPreviewGrey;  
   private MenuItem             mItemPreviewFaces;  
   private MenuItem             mItemPreview3D;  
   private int               mViewMode;  
   private Mat               mRgba;  
   private Mat               mGrey;  
   private int                              screen_w, screen_h;  
   private Tutorial3View            mOpenCvCameraView;   
   //private Bitmap[]      mImageCache; // A place to store our pics       
   private MatOfByte[]     mImageCache; // A place to store our pics in jpg format  
   private int           numberofitems;  
   private int           index;  
   private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {  
     @Override  
     public void onManagerConnected(int status) {  
       switch (status) {  
         case LoaderCallbackInterface.SUCCESS:  
         {  
           // Load native library after(!) OpenCV initialization  
           mOpenCvCameraView.enableView();        
         } break;  
         default:  
         {  
           super.onManagerConnected(status);  
         } break;  
       }  
     }  
   };  
   public _3DActivity() {  
   }  
   /** Called when the activity is first created. */  
   @Override  
   public void onCreate(Bundle savedInstanceState) {  
     super.onCreate(savedInstanceState);  
     getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);  
     setContentView(R.layout.tutorial2_surface_view);  
     mOpenCvCameraView = (Tutorial3View) findViewById(R.id.tutorial2_activity_surface_view);  
     mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);  
     mOpenCvCameraView.setCvCameraViewListener(this);  
     index=0;  
   }  
   @Override  
   public void onPause()  
   {  
     super.onPause();  
     if (mOpenCvCameraView != null)  
       mOpenCvCameraView.disableView();  
   }  
   @Override  
   public void onResume()  
   {  
     super.onResume();  
     OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);  
   }  
   public void onDestroy() {  
     super.onDestroy();  
     if (mOpenCvCameraView != null)  
       mOpenCvCameraView.disableView();  
   }  
   public void onCameraViewStarted(int width, int height) {  
        screen_w=width;  
        screen_h=height;  
     mRgba = new Mat(screen_w, screen_h, CvType.CV_8UC4);  
     mGrey = new Mat(screen_w, screen_h, CvType.CV_8UC1);  
     load_images();  
     Log.v("MyActivity","Height: "+height+" Width: "+width);  
   }  
   public void onCameraViewStopped() {  
     mRgba.release();  
     mGrey.release();  
   }  
   public Mat onCameraFrame(CvCameraViewFrame inputFrame) {  
        long startTime = System.nanoTime();  
        long endTime;  
        boolean show=true;  
        mRgba=inputFrame.rgba();  
        if (mViewMode==VIEW_MODE_CAMERA) {  
             endTime = System.nanoTime();  
          if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
             return mRgba;  
        }  
        if (mViewMode==VIEW_MODE_GREY){             
             Imgproc.cvtColor( mRgba, mGrey, Imgproc.COLOR_BGR2GRAY);   
             endTime = System.nanoTime();  
          if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
             return mGrey;  
        }  
        // REDUCE THE RESOLUTION TO EXPEDITE THINGS  
        Mat low_res = new Mat(screen_w, screen_h, CvType.CV_8UC4);  
        Imgproc.resize(mRgba,low_res,new Size(),0.25,0.25,Imgproc.INTER_LINEAR);  
        Bitmap bmp = null;  
        try {  
          bmp = Bitmap.createBitmap(low_res.width(), low_res.height(), Bitmap.Config.RGB_565);  
          Utils.matToBitmap(low_res, bmp);  
        }  
        catch (CvException e){Log.v("MyActivity",e.getMessage());}  
           int maxNumFaces = 1; // Set this to whatever you want  
           FaceDetector fd = new FaceDetector((int)(screen_w/4),(int)(screen_h/4),  
                  maxNumFaces);  
           Face[] faces = new Face[maxNumFaces];  
           int numFacesFound=0;  
           try {  
                  numFacesFound = fd.findFaces(bmp, faces);  
             } catch (IllegalArgumentException e) {  
                  // From Docs:  
                  // if the Bitmap dimensions don't match the dimensions defined at initialization   
                  // or the given array is not sized equal to the maxFaces value defined at   
                  // initialization  
                  Log.v("MyActivity","Argument dimensions wrong");  
             }  
           if (mViewMode==VIEW_MODE_FACES) {  
                if (numFacesFound<maxNumFaces) maxNumFaces=numFacesFound;  
                for (int i = 0; i < maxNumFaces; ++i) {  
                     Face face = faces[i];  
                     PointF MidPoint = new PointF();  
                     face.getMidPoint(MidPoint);  
                     /* Log.v("MyActivity","Face " + i + " found with " + face.confidence() + " confidence!");  
                       Log.v("MyActivity","Face " + i + " eye distance " + face.eyesDistance());  
                       Log.v("MyActivity","Face " + i + " midpoint (between eyes) " + MidPoint);*/  
                     Point center= new Point(4*MidPoint.x, 4*MidPoint.y);  
                     Core.ellipse( mRgba, new Point(center.x,center.y), new Size(8*face.eyesDistance(), 8*face.eyesDistance()), 0, 0, 360, new Scalar( 255, 0, 255 ), 4, 8, 0 );  
                }  
                endTime = System.nanoTime();  
                if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
                return mRgba;  
                //return low_res;  
        }          
           // 3D  
           if (numFacesFound>0){  
                Face face = faces[0];  
                PointF MidPoint = new PointF();  
                face.getMidPoint(MidPoint);  
                int face_x=4*(int)MidPoint.x;  
                // The face can show up from x0=k.screen_w to x1=(1-k)screen_w  
                // index=A.face_x+B  
                // 0 = A.k.screen_w + B  
                // N = A.(1-k).screen_w + B where N=numberofitems-1  
                // Therefore:  
                // A=N/((1-2k).screen_w)  
                // B=-A.k.screen_w=-N.k/(1-2k)  
                int N=numberofitems-1;  
                double k=0.1;  
                double A=N/((1-2*k)*screen_w);  
                double B=-N*k/(1-2*k);  
                index=(int)Math.floor(A*face_x+B);                 
                index=numberofitems-index-1;  
                //Log.v("MyActivity","x: "+face_x+" index: "+index);  
                if (index<0) index=0;  
                if (index>numberofitems-1) index=numberofitems-1;  
           }  
           //mImageCache[index] is a array of bytes containing the jpg  
           mRgba=Highgui.imdecode(mImageCache[index],Highgui.CV_LOAD_IMAGE_COLOR);  
           endTime = System.nanoTime();  
           if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
           //Log.v("MyActivity","Index: "+index);  
           return mRgba;  
    }  
   @Override  
   public boolean onCreateOptionsMenu(Menu menu) {  
     mItemPreviewRGBA = menu.add("RGBA");  
     mItemPreviewGrey = menu.add("Grey");  
     mItemPreviewFaces = menu.add("Faces");  
     mItemPreview3D = menu.add("3D");  
     return true;  
   }  
   public boolean onOptionsItemSelected(MenuItem item) {  
     if (item == mItemPreviewRGBA) {  
       mViewMode = VIEW_MODE_CAMERA;  
     } else if (item == mItemPreviewGrey) {  
       mViewMode = VIEW_MODE_GREY;  
     } else if (item == mItemPreviewFaces) {  
       mViewMode = VIEW_MODE_FACES;  
     } else if (item == mItemPreview3D) {  
       mViewMode = VIEW_MODE_3D;  
     }  
     return true;  
   }    
   //LOAD IMAGES  
   void load_images(){  
     //android.hardware.Camera.Size r=mOpenCvCameraView.getResolution();  
        String root = Environment.getExternalStorageDirectory().toString();  
           File myDir = new File(root + "/DCIM/3D");   
        File[] file_list = myDir.listFiles();   
        Arrays.sort(file_list);          // Otherwise file order is unpredictable  
        numberofitems=file_list.length;  
        //mImageCache=new Bitmap[numberofitems];  
        mImageCache=new MatOfByte[numberofitems];  
        Mat temp3 = new Mat(screen_w, screen_h, CvType.CV_8UC4);  
        MatOfInt compression_params=new MatOfInt(Highgui.CV_IMWRITE_JPEG_QUALITY,50);  
        Log.v("MyActivity","NOI: "+numberofitems);  
        for (int i=0;i<numberofitems;i++){  
             try{  
                  mImageCache[i]=new MatOfByte();  
                  Log.v("MyActivity","i: "+i);  
                  Bitmap temp1=BitmapFactory.decodeFile(file_list[i].getPath());  
                  Bitmap temp2=Bitmap.createScaledBitmap(temp1, screen_w , screen_h, true);  
                  Utils.bitmapToMat(temp2,temp3);  
                  //Log.v("MyActivity","w: "+temp3.width()+" l: "+temp3.height());  
                  Highgui.imencode(".jpg", temp3, mImageCache[i],compression_params);  
                  Log.v("MyActivity","Length: "+mImageCache[i].total());  
             } catch (Exception e) {  
                e.printStackTrace();  
                   Log.v("MyActivity", "L: Error loading");  
              }  
        }  
   }  
 }  

Tutorial3View.java
 package com.cell0907.TDpic;  
 import java.io.FileOutputStream;  
 import java.util.List;  
 import org.opencv.android.JavaCameraView;  
 import android.content.Context;  
 import android.hardware.Camera;  
 import android.hardware.Camera.PictureCallback;  
 import android.hardware.Camera.Size;  
 import android.util.AttributeSet;  
 import android.util.Log;  
 public class Tutorial3View extends JavaCameraView implements PictureCallback {  
   private static final String TAG = "MyActivity";  
   private String mPictureFileName;  
   public Tutorial3View(Context context, AttributeSet attrs) {  
     super(context, attrs);  
   }  
   public List<String> getEffectList() {  
     return mCamera.getParameters().getSupportedColorEffects();  
   }  
   public boolean isEffectSupported() {  
     return (mCamera.getParameters().getColorEffect() != null);  
   }  
   public String getEffect() {  
     return mCamera.getParameters().getColorEffect();  
   }  
   public void setEffect(String effect) {  
     Camera.Parameters params = mCamera.getParameters();  
     params.setColorEffect(effect);  
     mCamera.setParameters(params);  
   }  
   public List<Size> getResolutionList() {  
     return mCamera.getParameters().getSupportedPreviewSizes();  
   }  
   public void setResolution(Size resolution) {  
     disconnectCamera();  
     mMaxHeight = resolution.height;  
     mMaxWidth = resolution.width;  
     connectCamera(getWidth(), getHeight());  
   }  
   public Size getResolution() {  
     return mCamera.getParameters().getPreviewSize();  
   }  
   public void takePicture(final String fileName) {  
     Log.i(TAG, "Taking picture");  
     this.mPictureFileName = fileName;  
     // Postview and jpeg are sent in the same buffers if the queue is not empty when performing a capture.  
     // Clear up buffers to avoid mCamera.takePicture to be stuck because of a memory issue  
     mCamera.setPreviewCallback(null);  
     // PictureCallback is implemented by the current class  
     mCamera.takePicture(null, null, this);  
   }  
   @Override  
   public void onPictureTaken(byte[] data, Camera camera) {  
     Log.i(TAG, "Saving a bitmap to file");  
     // The camera preview was automatically stopped. Start it again.  
     mCamera.startPreview();  
     mCamera.setPreviewCallback(this);  
     // Write the image in a file (in jpeg format)  
     try {  
       FileOutputStream fos = new FileOutputStream(mPictureFileName);  
       fos.write(data);  
       fos.close();  
     } catch (java.io.IOException e) {  
       Log.e("PictureDemo", "Exception in photoCallback", e);  
     }  
   }  
 }  

AndroidManifest.xml
 <?xml version="1.0" encoding="utf-8"?>  
 <manifest xmlns:android="http://schemas.android.com/apk/res/android"  
      package="com.cell0907.TDpic"  
      android:versionCode="21"  
      android:versionName="2.1">  
       <supports-screens android:resizeable="true"  
            android:smallScreens="true"  
            android:normalScreens="true"  
            android:largeScreens="true"  
            android:anyDensity="true" />  
   <uses-sdk android:minSdkVersion="8"   
               android:targetSdkVersion="10" />  
   <uses-permission android:name="android.permission.CAMERA"/>  
   <uses-feature android:name="android.hardware.camera" android:required="false"/>  
   <uses-feature android:name="android.hardware.camera.autofocus" android:required="false"/>  
   <uses-feature android:name="android.hardware.camera.front" android:required="false"/>  
   <uses-feature android:name="android.hardware.camera.front.autofocus" android:required="false"/>  
   <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>  
   <application  
     android:label="@string/app_name"  
     android:icon="@drawable/icon"  
     android:theme="@android:style/Theme.NoTitleBar.Fullscreen"  
     android:allowBackup="false">  
     <activity android:name="_3DActivity"  
          android:label="@string/app_name"  
          android:screenOrientation="landscape"  
          android:configChanges="keyboardHidden|orientation">  
       <intent-filter>  
         <action android:name="android.intent.action.MAIN" />  
         <category android:name="android.intent.category.LAUNCHER" />  
       </intent-filter>  
     </activity>  
   </application>  
 </manifest>  

Notice in the manifest the android.permission.READ_EXTERNAL_STORAGE and android.permission.CAMERA permissions

And tutorial2_surface_view.xml
 <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  
   xmlns:tools="http://schemas.android.com/tools"  
   xmlns:opencv="http://schemas.android.com/apk/res-auto"  
   android:layout_width="match_parent"  
   android:layout_height="match_parent" >  
   <com.cell0907.TDpic.Tutorial3View  
     android:id="@+id/tutorial2_activity_surface_view"  
     android:layout_width="match_parent"  
     android:layout_height="match_parent"  
     opencv:camera_id="1"  
     opencv:show_fps="false" />  
 </LinearLayout>  

Finally, as promised, some background on this project.

A friend to whom I had shown this app back in January just sent me this link. It is about the Amazon phone, which we believe is using the same trick I use here... Of course, I am not saying I was the first one to come up with this idea. Probably somebody had the same thought before. Other folks have used similar tricks (like using a Kinect or a hack of the Wiimote) to sense where the viewer's head is with respect to the display and present the right image.

Using those approaches they also get a better/real/full-3D location of the eyes/face/head with respect to the display, which allows for an even better effect. With one camera you can find only the angle of the face with respect to the display, but not the distance (although one could argue that you can use the size of the face to estimate that...). Amazon probably solves this with a few cameras and triangulation.

Another aspect for improvement is that if there are several viewers in the field of view (FOV) of the camera, it can get confused about whom to show the image to. You could still present with respect to one viewer as long as you keep tracking the same face, which is one level above what I do.

I also don't do vertical tracking, only horizontal, to the sides... No biggie... I just didn't have enough pics. To simplify the picture taking part, I was thinking of using OpenGL/a virtual world instead of real life pictures, but never finished that... That would certainly be faster to render.

Finally, I am sure the final effect in Amazon's phone will be a production ready thing, a better effect than I got, I hope, lol! (disclaimer :) ).

Anyhow, just posting this to claim my bragging rights, no matter how small those may be :P

Cheers!

PS.: Please check the following links for a full index of OpenCV and Android posts with other super duper examples :P

Wednesday, April 9, 2014

HSA reporting

One more year filling in the taxes and this year, I happen to have an HSA. So, this is how I entered the info (disclaimer, not sure if right or wrong):
  • In the W2 the employer puts all contributions (employer and employee), although the wording calls them "employer contributions". Kind of distracting...
  • You also should have got
  • Then you got to fill in form 8889. In my case, very simple, single, etc... I had to put:
    • Box 3, 5, 6 and 8: $3250. Basically max I could contribute.
    • Box 9, 11: what was showing on the W2. Contributions from your employer (which include yours done directly from payroll). In my case $2500. Check the form 5498-SA that you should have received from your broker.
    • That makes box 12 $750. I.e., that's what we could have contributed but we didn't.
    • And then on distributions (what you took from the HSA) I had 14a, 14c and 15 as $727. All mine are qualified medical expenses... You should have got form 1099 from your HSA broker. Just look it up...
Basically straightforward stuff. Do not duplicate the amounts from your W2. It is weird because somehow I put in $2k out of the $2.5k contributed, but I didn't have to write this anywhere... So, did the $500 from my employer also give me a tax break? I think it basically did. As all of this comes from the employer, it is already accounted for on the W2. Notice that on the 1040 nothing shows up in box 25... That box is only for contributions other than employer contributions.

One thing to notice is that as you fill in stuff in TaxAct or TurboTax, it considers the HSA contribution as income. Then, when you finally enter the HSA form, it removes it, giving you a break on the amount of tax you owe.

HSA FAQ
Other link...

Sunday, March 16, 2014

Simple plan for Barcelona visit - Day B - Eixample / Gaudi

For this B day, we will be in the area of Barcelona called the Eixample, and also a bit further up towards the mountain. Please see here for other locations in Barcelona. Except for Park Guell, the other locations are buildings, so they are ok to visit during bad weather... Nevertheless, Sagrada Familia may close the visit to the tower (just the tower) in that case... Also, they are nice at night (but open only during the daytime...):
  1. Sagrada Familia: this is one of the most emblematic icons of the city. You can buy the tickets on-line. Not sure if that will save you the line outside, but I heard that although long, it is not too bad... Unless you are on a tight budget, I do think it is worth going in. Going up the tower is nice too. I can talk for hours about this place, but to save you that, I would strongly recommend you read the Wikipedia article to get a good background. Nice article about the towers and which one to visit.
  2. Pedrera and Casa Batllo: walking distance from Sagrada Familia (see the black line in the map below). Famous spots of the city too; feel free to go in if your budget/time allows.
  3. To get to Park Guell, another must-see place, I usually take the subway green line (L3) to Vallcarca station. The line kind of runs along the mountain/sea axis for a while, down Gracia and the Ramblas, so you can take it from any station there (Passeig de Gracia, Catalunya, Liceu, Drassanes...). Once you get to Vallcarca, walk a bit along Vallcarca Avenue, in the sea direction, till you hit Baixada de la Gloria on your left (red line in the map). It is a street full of escalators (feel free to walk too :) ) that will take you to the south side of the park. I feel that is more interesting than getting a taxi up there, but hey... Unfortunately, the city started charging to enter the park around 2016 (can't remember exactly, but it used to be free), but you can get there earlier or later than the running hours and see it for free. In the morning you can see the sunrise (assuming you can wake up in time :) ), so it is much better than in the evening, when it may get dark depending on the time of the year and you can't see anything in the park (no lights).

Check the map here

Safe travels!!

Saturday, March 15, 2014

Simple plans to visit Barcelona

A few friends asked me what to do and where to go in Barcelona, so here is a summary of some stuff... I broke it down by days, to make it easier to plan things...

Note: I hate to start with this, but watch out for pickpockets... A few friends of mine had their wallet or purse stolen, so... Other than that, I would consider Barcelona to be very safe.

For a 3-day trip I would think something like this (sorry, not finished, I'll add it later). The order of the days does not matter. Probably plan according to the weather forecast :)
But if you have 2, it gets tougher. So, here is one potential thought:
  1. Day 1: Sagrada Familia (day B above) / Park Guell (day B above)
  2. Day 2: Day C above.
  3. With whatever after-hours/night time you have, you can see the outside of the Pedrera and Casa Batllo (day B above) and the Magic Fountains (day A above).
Cheers!
PS.: Sorry that I can't include everything, as Barcelona is full of stuff to see and do. If you want more info, these are some quick sites I found:

Simple plan for Barcelona visit - Day A - Montjuic Area

Please see here for overall index for other areas of town.

Note 1: The order you do this is a bit up to you. If you got your hotel close to this area, you can just do this anytime. But if not, you may want to plan to finish at night and stop by the Magic Fountains.
Note 2: times indicate my guess on how much somebody would spend there...
Note 3: click on the map below and a Google Maps route should open up.
  • Spanish Square (Plaza de España / Plaça d'Espanya). Subway stop, red line (#1). Head towards the two towers you see there, which mark the Fira (trade fair) entrance and the path to the mountain (most of the stuff to see in the area...).
  • Las Arenas: in the Spanish Square (actually, one exit of the subway is there), it is a bullfighting ring remodeled into a mall. You can go to the roof top (for free if you do it from inside the mall) and enjoy the views, food, etc... or the shopping, if you like :). Maybe something to do when you want to take a break.
  • Olympic ring: 1-2 hr. Take a stroll around... There are escalators up to here from Plaza España:
    • Estadio Olimpico. Check the cauldron where this happened
    • Estadio de Sant Jordi - Basketball
    • Torre de Calatrava - I like this one
  • Castillo de Montjuic (a castle with city and sea views): you can walk up here or take a lift - 1 hr
  • Poble Espanyol: a display of other regions in Spain.
  • Diving Pools (Piscinas de salto): where during the Olympics you could see this beautiful background. I believe now most of the time they are closed, so you can only see them from the fence...
  • Greek Theater Gardens - 30 min
  • Magic Fountains: a must-see, at night. 1 hr... till you get tired. The show/music changes along the night... I copy the schedule below in case the link gets broken, but it may get out of date (so it's better to Google it). Notice that during most of the year they only run Thursday through Saturday, so plan a visit to this area on those days...
    • Operating hours from 30th April to 30th September:
      • Thursday to Sunday, 9pm – 11:30pm
      • Musical displays: 9pm, 9:30pm, 10pm, 10:30pm and 11pm
    • Operating hours from 1st October to 30th April:
      • Fridays and Saturdays, 7pm – 9pm
      • Christmas and Easter:
      • Thursday to Sunday, 7pm – 9pm
      • Musical displays: 7pm, 7:30pm, 8pm and 8:30pm
  • B-Hotel: I am not endorsing this hotel by any means and I make no money with this site... Do your research. But some friends of mine stayed there and liked it, especially the rooftop pool! Truth is that no one has so far complained about their hotel, wherever they stayed. 
  • There are many other things to do and see in the area: Miro Museum and many others (see the links on the right side of this page)


View Montjuic route in a larger map

Hope you enjoy!!

Thursday, March 6, 2014

Recovering an edited email file attachment

My life just flashed before my eyes... I had been working the whole day adding notes and corrections to a PDF file, a job that I actually hate and was looking forward to finishing. I kept clicking save without realizing that I had never made a copy to disk. I was not getting any errors, though... Then I closed it and it hit me (with scenes of my infancy, chills, and thoughts of how stupid I can be). Well, if you are here, you probably have the same symptoms.

I have to say that I was lucky and found this
Bottom line: the file is in your Internet files folder, but it will not show up in a search. Follow the link to find it.

Cheers! Now you can continue with your life :)

Tuesday, February 25, 2014

Virtual Desktop on Windows 7

Just for my reference... See this article
I decided to install VirtuaWin and so far so good. Was easy to install and operate.
Some people thought this one was very lightweight and easy, but I haven't tried it.

Thursday, January 30, 2014

Engineering quotes that I agree with...

“The path to the CEO's office should not be through the CFO's office, and it should not be through the marketing department. It needs to be through engineering and design.”
-- Elon Musk, CEO and chief product architect, Tesla Motors

“The worst thing in the world that can happen to you if you’re an engineer that has given life to something is for someone to rip it off and put their name on it.”
-- Tim Cook, CEO of Apple Inc.

“So here we have pi squared, which an engineer would call 10.”
-- Frank King, cartoonist, creator of Gasoline Alley

“You go to a technology conference or an engineering conference, there are very few women there. At the same time, it’s a blessing in the fact that you do get noticed. People tend to remember you as the only woman in the room ‘who said that’ or the only woman in the room who was an engineer.”
-- Padmasree Warrior, CTO of Cisco Systems, former CTO of Motorola Inc.

“Let’s face it: Engineering companies in general have more men than women. Google has tried really hard to recruit women. On the other hand, we have a standard. Google tries to recruit the best engineers.”
-- Susan Wojcicki, senior vice president in charge of product management and engineering at Google

Source: I got these, among many others, from the following link.

A few more:
"I do not know one millionth part of one percent about anything."
-- Thomas Edison


Saturday, January 25, 2014

Detect faces with Android SDK, no OpenCV

By now, we have tackled this in many ways :). Usually with OpenCV. See a list of posts here...
Now we are going to basically take this post, where we detected faces with OpenCV in Android, and replace the face detection portion (CascadeClassifier + detectMultiScale) with the FaceDetector.findFaces method from the Android SDK. So, everything else is the same as in that post; the only things we need to do are:
  1. Convert Mat to Bitmap on the right format for the face detection method (RGB_565).
  2. Do the face detection with the Android SDK.
We also reduce the original image resolution so that the detection happens much faster, as we did in the original post. So, although I copy all the code here so you don't have to go back and forth, the part that changes is what goes after VIEW_MODE_GRAY inside the "public Mat onCameraFrame(CvCameraViewFrame inputFrame)" method. Notice that we still use the OpenCV framework to capture and display the image, not the Android SDK approach.
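A note on the scale-back step: since detection runs on a quarter-resolution copy of the frame, every geometric value FaceDetector returns (the eye midpoint, eyesDistance) is in low-res pixels and must be multiplied back up before drawing on the full frame. Here is a minimal plain-Java sketch of that math (the class and method names are my own, hypothetical; this is not part of the Android code below):

```java
// Hypothetical helper (not from the original post) illustrating why the demo
// multiplies the detector's results by 4: detection runs on a frame resized
// by 0.25, so low-res coordinates map back to full resolution by *4.
public class FaceScale {
    static final int DOWNSCALE = 4; // matches Imgproc.resize(..., 0.25, 0.25, ...)

    // Eye midpoint found in the low-res bitmap, mapped to full-res pixels.
    static float[] fullResMidpoint(float x, float y) {
        return new float[] { x * DOWNSCALE, y * DOWNSCALE };
    }

    // Ellipse radius used when drawing: roughly 2x the eye distance at full
    // resolution, i.e. 8x the low-res eyesDistance() value.
    static float ellipseRadius(float lowResEyesDistance) {
        return 2 * DOWNSCALE * lowResEyesDistance;
    }

    public static void main(String[] args) {
        float[] mid = fullResMidpoint(80f, 60f);
        System.out.println(mid[0] + "," + mid[1]); // 320.0,240.0
        System.out.println(ellipseRadius(20f));    // 160.0
    }
}
```

This is exactly what the `new Point(4*MidPoint.x, 4*MidPoint.y)` and `8*face.eyesDistance()` expressions in the listing below are doing.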

Note: it seems that the FaceDetector used here is not the one my built-in camera app is using. I know this from a simple performance comparison: for instance, if I rotate the camera, the camera app still detects my face, but FaceDetector loses it. There also seems to be one more way to detect faces in the SDK, starting from Android 4.0, which I have not tried (so I don't know its performance). Still, that is probably not what the camera app uses either. This post here points to the same and has no answer, in case anybody wants to earn some StackOverflow points :). I agree, though, that relying on the built-in camera app would likely narrow my software down to my phone.

_3DActivity.java
 /*  
  * Working demo of face detection (remember to put the camera in horizontal)  
  * using the Android SDK FaceDetector (capture/display still via OpenCV).  
  * Based on http://cell0907.blogspot.com/2014/01/detecting-faces-in-android-with-opencv.html  
  */  
 package com.cell0907.td1;  
 import org.opencv.android.BaseLoaderCallback;  
 import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;  
 import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;  
 import org.opencv.android.LoaderCallbackInterface;  
 import org.opencv.android.OpenCVLoader;  
 import org.opencv.android.Utils;  
 import org.opencv.core.Core;  
 import org.opencv.core.CvException;  
 import org.opencv.core.CvType;  
 import org.opencv.core.Mat;  
 import org.opencv.core.Scalar;  
 import org.opencv.core.Size;  
 import org.opencv.imgproc.Imgproc;  
 import org.opencv.core.Point;  
 import android.app.Activity;  
 import android.graphics.Bitmap;  
 import android.graphics.PointF;  
 import android.media.FaceDetector;  
 import android.media.FaceDetector.Face;  
 import android.os.Bundle;  
 import android.util.Log;  
 import android.view.Menu;  
 import android.view.MenuItem;  
 import android.view.SurfaceView;  
 import android.view.WindowManager;  
 public class _3DActivity extends Activity implements CvCameraViewListener2 {  
   private static final int         VIEW_MODE_CAMERA  = 0;  
   private static final int         VIEW_MODE_GRAY   = 1;  
   private static final int         VIEW_MODE_FACES  = 2;  
   private MenuItem             mItemPreviewRGBA;  
   private MenuItem             mItemPreviewGrey;  
   private MenuItem             mItemPreviewFaces;  
   private int               mViewMode;  
   private Mat               mRgba;  
   private Mat               mGrey;  
   private int                              screen_w, screen_h;  
   private Tutorial3View            mOpenCvCameraView;   
   private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {  
     @Override  
     public void onManagerConnected(int status) {  
       switch (status) {  
         case LoaderCallbackInterface.SUCCESS:  
         {  
           // Load native library after(!) OpenCV initialization  
           mOpenCvCameraView.enableView();        
         } break;  
         default:  
         {  
           super.onManagerConnected(status);  
         } break;  
       }  
     }  
   };  
   public _3DActivity() {  
   }  
   /** Called when the activity is first created. */  
   @Override  
   public void onCreate(Bundle savedInstanceState) {  
     super.onCreate(savedInstanceState);  
     getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);  
     setContentView(R.layout.tutorial2_surface_view);  
     mOpenCvCameraView = (Tutorial3View) findViewById(R.id.tutorial2_activity_surface_view);  
     mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);  
     mOpenCvCameraView.setCvCameraViewListener(this);  
   }  
   @Override  
   public void onPause()  
   {  
     super.onPause();  
     if (mOpenCvCameraView != null)  
       mOpenCvCameraView.disableView();  
   }  
   @Override  
   public void onResume()  
   {  
     super.onResume();  
     OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback);  
   }  
   public void onDestroy() {  
     super.onDestroy();  
     if (mOpenCvCameraView != null)  
       mOpenCvCameraView.disableView();  
   }  
   public void onCameraViewStarted(int width, int height) {  
        screen_w=width;  
        screen_h=height;  
     mRgba = new Mat(screen_w, screen_h, CvType.CV_8UC4);  
     mGrey = new Mat(screen_w, screen_h, CvType.CV_8UC1);  
     Log.v("MyActivity","Height: "+height+" Width: "+width);  
   }  
   public void onCameraViewStopped() {  
     mRgba.release();  
     mGrey.release();  
   }  
   public Mat onCameraFrame(CvCameraViewFrame inputFrame) {  
        long startTime = System.nanoTime();  
        long endTime;  
        boolean show=true;  
        mRgba=inputFrame.rgba();  
        if (mViewMode==VIEW_MODE_CAMERA) {  
             endTime = System.nanoTime();  
          if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
             return mRgba;  
        }     
        if (mViewMode==VIEW_MODE_GRAY){  
          Imgproc.cvtColor( mRgba, mGrey, Imgproc.COLOR_BGR2GRAY);           
          endTime = System.nanoTime();  
          if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
             return mGrey;  
        }  
        // REDUCE THE RESOLUTION TO EXPEDITE THINGS  
        Mat low_res = new Mat(screen_w, screen_h, CvType.CV_8UC4);  
        Imgproc.resize(mRgba,low_res,new Size(),0.25,0.25,Imgproc.INTER_LINEAR);  
        Bitmap bmp = null;  
        try {  
          bmp = Bitmap.createBitmap(low_res.width(), low_res.height(), Bitmap.Config.RGB_565);  
          Utils.matToBitmap(low_res, bmp);  
        }  
        catch (CvException e){Log.v("MyActivity",e.getMessage());}  
           int maxNumFaces = 1; // Set this to whatever you want  
           FaceDetector fd = new FaceDetector((int)(screen_w/4),(int)(screen_h/4),  
                  maxNumFaces);  
           Face[] faces = new Face[maxNumFaces];  
           try {  
                  int numFacesFound = fd.findFaces(bmp, faces);  
                  if (numFacesFound<maxNumFaces) maxNumFaces=numFacesFound;  
                  for (int i = 0; i < maxNumFaces; ++i) {  
                       Face face = faces[i];  
                       PointF MidPoint = new PointF();  
                 face.getMidPoint(MidPoint);  
 /*                      Log.v("MyActivity","Face " + i + " found with " + face.confidence() + " confidence!");  
                       Log.v("MyActivity","Face " + i + " eye distance " + face.eyesDistance());  
                       Log.v("MyActivity","Face " + i + " midpoint (between eyes) " + MidPoint);*/  
                       Point center= new Point(4*MidPoint.x, 4*MidPoint.y);  
                       Core.ellipse( mRgba, new Point(center.x,center.y), new Size(8*face.eyesDistance(), 8*face.eyesDistance()), 0, 0, 360, new Scalar( 255, 0, 255 ), 4, 8, 0 );  
                  }  
             } catch (IllegalArgumentException e) {  
                  // From Docs:  
                  // if the Bitmap dimensions don't match the dimensions defined at initialization   
                  // or the given array is not sized equal to the maxFaces value defined at   
                  // initialization  
                  Log.v("MyActivity","Argument dimensions wrong");  
             }  
           if (mViewMode==VIEW_MODE_FACES) {  
                endTime = System.nanoTime();  
                if (show==true) Log.v("MyActivity","Elapsed time: "+ (float)(endTime - startTime)/1000000+"ms");  
             return mRgba;  
                //return low_res;  
        }          
           return mRgba;  
    }  
   @Override  
   public boolean onCreateOptionsMenu(Menu menu) {  
     mItemPreviewRGBA = menu.add("RGBA");  
     mItemPreviewGrey = menu.add("Grey");  
     mItemPreviewFaces = menu.add("Faces");  
     return true;  
   }  
   public boolean onOptionsItemSelected(MenuItem item) {  
     if (item == mItemPreviewRGBA) {  
       mViewMode = VIEW_MODE_CAMERA;  
     } else if (item == mItemPreviewGrey) {  
       mViewMode = VIEW_MODE_GRAY;  
     } else if (item == mItemPreviewFaces) {  
       mViewMode = VIEW_MODE_FACES;  
     }  
     return true;  
   }    
 }  

For the rest of the files, please check the original post.

Just a final note... Somebody may wonder why I am mixing OpenCV with the face detection from the Android SDK. I am using OpenCV because it allows me to display something completely unrelated to what the camera is capturing (I need that for my final app). Although I found a method to do that without OpenCV, I don't think it works with all the devices out there. And I use the face detection from the Android SDK because I think it is more robust and still works, for instance, when turning the head... The weird thing is that it doesn't seem to work as well as the one in my camera app (the one that came from the factory with my phone, an HTC One). I also thought it would be faster, but it actually looks slower...

PS.: Reference I used...