Compare commits
No commits in common. "python3" and "patches" have entirely different histories.
7  .gitignore  (vendored)
@@ -1,8 +1 @@
*.swp
deployment-config.mk
scanner/venv
scanner/floatapp/app.cfg
*.pyc
*.min.css
*.min.js
.vscode
145  README.md
@@ -1,61 +1,48 @@
# subPhotoFloat – A [Photofloat](https://git.zx2c4.com/PhotoFloat/) fork
## Web Photo Gallery via Static JSON & Dynamic Javascript with a Python Backend
# PhotoFloat
### A Web 2.0 Photo Gallery Done Right via Static JSON & Dynamic Javascript
#### by Jason A. Donenfeld (<Jason@zx2c4.com>)

by Jason A. Donenfeld (<Jason@zx2c4.com>)
with some changes by Markus Pawlata (<markus@derdritte.net>) and other [collaborators](https://git.jocke.no/photofloat/log/?h=patches).

![Screenshot](http://data.zx2c4.com/photo-float-small.jpg)

![Screenshot](https://photos.derdritte.net/img/screenshot.png)

PhotoFloat is an open source web photo gallery aimed at sleekness and speed. It keeps with an old hat mentality, preferring to work over directory structures rather than esoteric photo database management software. Everything it generates is static, which means it's extremely fast.

subPhotoFloat is an open source web photo gallery aimed at sleekness and speed. It keeps with a minimalist philosophy, preferring to work over directory structures rather than bloated photo database management software (there are good options available for that, if you want it). Everything it generates is static, which means it's extremely fast.

[Check out a demo!](https://photos.derdritte.net/#!/random)
[Check out a demo!](http://photos.jasondonenfeld.com/#santa_fe_and_telluride_8.19.10-8.27.10/western_202.jpg)

## How It Works

subPhotoFloat consists of two main segments – a Python script and a JavaScript application.
PhotoFloat consists of two segments – a Python script and a JavaScript application.

The Python script scans a directory tree of images, whereby each directory constitutes an album. It then populates a second folder, known as the cache folder, with statically generated JSON files and thumbnails. The scanner extracts metadata from EXIF tags in JPEG photos. subPhotoFloat is smart about file and directory modification time, so you are free to run the scanner script as many times as you want, and it will be very fast if there are few or zero changes since the last time you ran it.
Also part of the Python script is a small flask webapp, which lets you add authentication for certain albums/images and can start the scanner.
The Python script scans a directory tree of images, whereby each directory constitutes an album. It then populates a second folder, known as the cache folder, with statically generated JSON files and thumbnails. The scanner extracts metadata from EXIF tags in JPEG photos. PhotoFloat is smart about file and directory modification time, so you are free to run the scanner script as many times as you want, and it will be very fast if there are few or zero changes since the last time you ran it.
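
For orientation, the scanner takes two arguments, the albums directory and the cache directory. The invocation below is only a sketch; the `scanner/main.py` entry point and the paths are assumptions based on this repository's layout:

    $ cd scanner
    $ python main.py /path/to/albums /path/to/cache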

The JavaScript application consists of a single `index.html` file with a single `scripts.min.js` and a single `styles.min.css`. It fetches the statically generated JSON files and thumbnails on the fly from the `cache` folder to create a speedy interface. Features include:

* Animations to make the interface feel nice
* Separate album view and photo view
* Album metadata pre-fetching
* Photo pre-loading
* Recursive async randomized tree walking album thumbnail algorithm
* Smooth up and down scaling
* Mouse-wheel support
* Swipe support (on mobile)
* Metadata display
* Consistent hash url format
* Consistant hash url format
* Linkable states via ajax urls
* Static rendering for googlebot conforming to the AJAX crawling spec.
* Facebook meta tags for thumbnail and post type
* Link to original images (can be turned off)
* Optional Google Analytics integration
* Optional server-side authentication support
* A thousand other tweaks here and there...

It is, essentially, a very slick and fast, fairly minimal but still well-featured photo gallery app.

## Dependencies

* python >= 2.6
* pillow >= 5.3.0
* nginx (or any webserver, really)

### Optional

* flask >= 0.11 (for authentication)
* flask-login >= 0.4.1 (for authentication)
* virtualenv (this is nice, [believe me](https://docs.python-guide.org/dev/virtualenvs/#lower-level-virtualenv))
* ffmpeg (for video conversion)

It is, essentially, the slickest and fastest, most minimal but still well-featured photo gallery app on the net.

## Installation

### Download the source code from the git repository
#### Download the source code from the git repository:

    $ git clone https://derdritte.net/gitea/markus/photofloat
    $ cd photofloat
    $ git clone git://git.zx2c4.com/PhotoFloat
    $ cd PhotoFloat

### Change or delete the Google Analytics ID tracker
#### Change or delete the Google Analytics ID tracker:

To delete:

@@ -67,18 +54,18 @@ To change:

Modify the part that says UA-XXXXXX-X and put your own in there.

### Tweak the index.html page to have a custom title or copyright notice
#### Tweak the index.html page to have a custom title or copyright notice.

    $ vim web/index.html

### Build the web page
#### Build the web page.

This simply runs all the javascript through Google Closure Compiler and all the CSS through YUI Compressor to minify and concatenate everything. Be sure you have java installed.

    $ cd web
    $ make

### Generate the albums
#### Generate the albums:

Now that we're in the web directory, let's make a folder for cache and a folder for the pictures:

@@ -92,75 +79,28 @@ When you're done, fill albums with photos and directories of photos. You can als

After it finishes, you will be all set. Simply have your web server serve pages out of your web directory. You may want to do the scanning step in a cronjob, if you don't use the deployment makefiles mentioned below.
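
If you go the cronjob route, a nightly entry along these lines is one way to do it (the paths are placeholders, and the `main.py` invocation mirrors how the optional flask app launches the scanner):

    0 3 * * * cd /path/to/photofloat/scanner && python main.py /path/to/albums /path/to/cache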

### Nginx configuration for static-only

Please keep in mind this will not provide any kind of access restrictions.

    server {
        listen 80;
        server_name photos.jasondonenfeld.com;
        location / {
            index index.html;
            root /var/www/htdocs/photos.jasondonenfeld.com;
        }
    }

Now, after deploying, `/var/www/htdocs/photos.jasondonenfeld.com` should contain:

* index.html
* js/
* css/
* img/
* albums/
* cache/

You can easily manage that by creating a folder structure and copying the relevant files over:

    $ cd ..
    photofloat ~ $ mkdir <deployment-folder>/js <deployment-folder>/css
    $ cp web/js/scripts.min.js <deployment-folder>/js
    $ cp web/css/styles.min.css <deployment-folder>/css
    $ cp -a fonts img index.html <deployment-folder>/

For easy updates, `albums` and `cache` can be set to also live in \<deployment-folder>; this is especially recommended if you are using the optional flask app mentioned in the following section.

Do **not** keep any of your config or Python files where the webserver can read or write; `deployment-config.mk` is the most sensitive. If you only want the static html/json & javascript application, you are done now!

## Optional: Server-side Authentication using flask
## Optional: Server-side Authentication

The JavaScript application uses a very simple API to determine if a photo can be viewed or not. If a JSON file returns error `403`, the album is hidden from view. To authenticate, `POST` a username and a password to `/auth`. If unsuccessful, `403` is returned. If successful, `200` is returned, and the previously denied json files may now be requested. If an unauthorized album is directly requested in a URL when the page loads, an authentication box is shown.
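
From the command line this looks roughly as follows; the host name is a placeholder and the credentials are the sample values from `app.cfg`, with the session kept in a cookie jar so later requests stay authenticated:

    $ curl -c cookies.txt -d "username=photos" -d "password=myphotopassword" https://photos.example.com/auth
    $ curl -b cookies.txt https://photos.example.com/cache/some-private-album.json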

subPhotoFloat ships with an optional server side component called FloatApp to facilitate this, which lives in `scanner/floatapp`. It is a simple Flask-based Python web application.
PhotoFloat ships with an optional server side component called FloatApp to faciliate this, which lives in `scanner/floatapp`. It is a simple Flask-based Python web application.

### Installation
#### Edit the app.cfg configuration file:

We need to install flask and other dependencies, ideally in a virtualenv, as this will keep your system-wide python installation clean and you can easily install more packages or different versions.

    $ cd scanner/floatapp
    $ vim app.cfg

    $ cd scanner
    $ virtualenv venv
    $ source venv/bin/activate
    $ pip install -r requirements.txt

Give this file a correct username and password, for both an admin user and a photo user, as well as a secret token. The admin user is allowed to call `/scan`, which automatically runs the scanner script mentioned in the previous section.

### Edit the app.cfg configuration file
#### Decide which albums or photos are protected:

    $ vim floatapp/app.cfg
    $ vim auth.txt

Give this file a correct username and password for an admin, as well as a secret token. The admin user is allowed to call `/scan`, which automatically runs the scanner script mentioned in the previous section.
This file takes one path per line. It restricts access to all photos in this path. If the path is a single photo, then that single photo is restricted.
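
For illustration, an `auth.txt` might contain entries like the following (the album and file names are made up):

    vacations/2018_summer
    family/portraits/img_0042.jpg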

In `app.cfg` you may also add elements to the `PERMISSION_MAP` as follows. The dictionary takes any path (to either an album or an image) and restricts any album or image matching that path to the listed tokens.
#### Configure nginx:

    …
    PERMISSION_MAP = {
        'private': ['thisisatoken'],
        'alsoprivate/butonlythis.jpg': ['morethan', 'onetoken'],
    }
    …

Tokens can contain anything you wish, and you can add as many paths and tokens as you require. One match in the `PERMISSION_MAP` will allow access even if another rule would forbid it. The admin is allowed to see any album or image.

### Configure nginx

FloatApp makes use of `X-Accel-Buffering` and [X-Accel-Redirect](https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/) to force the server-side component to have minimal overhead when serving images via flask. Here is an example nginx configuration that can be tweaked:
FloatApp makes use of `X-Accel-Buffering` and `X-Accel-Redirect` to force the server-side component to have minimal overhead. Here is an example nginx configuration that can be tweaked:

    server {
        listen 80;

@@ -197,44 +137,31 @@ FloatApp makes use of `X-Accel-Buffering` and [X-Accel-Redirect](https://www.ngi

        }
    }

Note that the `internal-*` paths must match that of `app.cfg`, since the flask app will redirect the "external" `/albums` and `/cache` paths to internal ones set in that config.
Note that the `internal-*` paths must match that of `app.cfg`. This makes use of uwsgi for execution:
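
The corresponding internal locations in nginx look roughly like this; the `/internal-albums` prefix and the albums path are taken from the sample `app.cfg`, while the cache pair is the analogous assumption:

    location /internal-albums/ {
        internal;
        alias /var/www/uwsgi/photofloat/albums/;
    }
    location /internal-cache/ {
        internal;
        alias /var/www/uwsgi/photofloat/cache/;
    }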

### Configure uwsgi (or use an alternate wsgi-provider)

    $ cat /etc/uwsgi.d/photofloat.ini
    metheny ~ # cat /etc/uwsgi.d/photofloat.ini
    [uwsgi]
    chdir = /var/www/uwsgi/%n
    master = true
    uid = %n
    gid = %n
    chmod-socket = 660
    chown-socket = nginx:nginx
    chown-socket = %n:nginx
    socket = /var/run/uwsgi-apps/%n.socket
    logto = /var/log/uwsgi/%n.log
    virtualenv = /var/www/uwsgi/photofloat/scanner/venv
    processes = 4
    idle = 1800
    die-on-idle = true
    plugins = python27
    module = floatapp:app

Change the paths for chdir, socket, logto and virtualenv to your preference.
Naturally, you can use any of the options available to [deploy](http://flask.pocoo.org/docs/1.0/deploying/#self-hosted-options) the flask app.
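
As one alternative sketch, gunicorn (not part of this repository) can serve the same `floatapp:app` module that the uwsgi config above points at:

    $ cd scanner
    $ venv/bin/pip install gunicorn
    $ venv/bin/gunicorn -w 2 -b 127.0.0.1:8000 floatapp:app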

## Optional: Deployment Makefiles

Both the scanner and the webpage have a `make deploy` target, and the scanner has a `make scan` target, to automatically deploy assets to a remote server and run the scanner. To use them, customize `deployment-config.mk` in the root of the project, and carefully read the `Makefile`s to learn what's happening.
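
Once `deployment-config.mk` is filled in, one way to invoke the targets looks like this (using GNU make's `-C` to run in the given directory):

    $ make -C web deploy
    $ make -C scanner deploy
    $ make -C scanner scan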

Be aware: you will very likely have to adapt the deployment instructions to what you have deployed on your server.
If you are using the flask app you will most likely not need the Makefiles.

## Mailing List & Suggestions

If you have any suggestions, feel free to contact the subPhotoFloat community via [our mailing list](http://lists.zx2c4.com/mailman/listinfo/photofloat). We're open to adding all sorts of features and working on integration points with other pieces of software.

Note: As the project is 8+ years old, the mailing list has slowed down a bit; if you do not get an answer immediately, please be patient and give other users some time to respond.

This app is also fairly small, so this might be the perfect project to try adding some small features yourself. For reference you may want to look at the flask & nginx documentation.
If you have any suggestions, feel free to contact the PhotoFloat community via [our mailing list](http://lists.zx2c4.com/mailman/listinfo/photofloat). We're open to adding all sorts of features and working on integration points with other pieces of software.

## License

1  scanner/.gitignore  (vendored, new file)
@@ -0,0 +1 @@
*.pyc
@ -1,90 +1,49 @@
|
||||
import os.path
|
||||
from datetime import datetime
|
||||
|
||||
|
||||
def message(category, text):
|
||||
if message.level <= 0:
|
||||
sep = " "
|
||||
else:
|
||||
sep = "--"
|
||||
print("%s %s%s[%s]%s%s" % (
|
||||
datetime.now().isoformat(),
|
||||
max(0, message.level) * " |",
|
||||
sep,
|
||||
category,
|
||||
max(1, (14 - len(category))) * " ",
|
||||
text))
|
||||
print "%s %s%s[%s]%s%s" % (datetime.now().isoformat(), max(0, message.level) * " |", sep, category, max(1, (14 - len(category))) * " ", text)
|
||||
message.level = -1
|
||||
|
||||
|
||||
def next_level():
|
||||
message.level += 1
|
||||
|
||||
|
||||
def back_level():
|
||||
message.level -= 1
|
||||
|
||||
|
||||
def set_cache_path_base(base):
|
||||
trim_base.base = base
|
||||
|
||||
|
||||
def untrim_base(path):
|
||||
return os.path.join(trim_base.base, path)
|
||||
|
||||
|
||||
def trim_base_custom(path, base):
|
||||
if path.startswith(base):
|
||||
path = path[len(base):]
|
||||
if path.startswith('/'):
|
||||
path = path[1:]
|
||||
return path
|
||||
|
||||
|
||||
def trim_base(path):
|
||||
return trim_base_custom(path, trim_base.base)
|
||||
|
||||
|
||||
def cache_base(path, filepath=False):
|
||||
if len(path) == 0:
|
||||
return "root"
|
||||
elif filepath and len(path.split(os.sep)) < 2:
|
||||
path = "root-" + path
|
||||
path = trim_base(path).replace(
|
||||
'/', '-').replace(
|
||||
' ', '_').replace(
|
||||
'(', '').replace(
|
||||
'&', '').replace(
|
||||
',', '').replace(
|
||||
')', '').replace(
|
||||
'#', '').replace(
|
||||
'[', '').replace(
|
||||
']', '').replace(
|
||||
'"', '').replace(
|
||||
"'", '').replace(
|
||||
'_-_', '-').lower()
|
||||
path = trim_base(path).replace('/', '-').replace(' ', '_').replace('(', '').replace('&', '').replace(',', '').replace(')', '').replace('#', '').replace('[', '').replace(']', '').replace('"', '').replace("'", '').replace('_-_', '-').lower()
|
||||
while path.find("--") != -1:
|
||||
path = path.replace("--", "-")
|
||||
while path.find("__") != -1:
|
||||
path = path.replace("__", "_")
|
||||
return path
|
||||
|
||||
|
||||
def json_cache(path):
|
||||
return cache_base(path) + ".json"
|
||||
|
||||
|
||||
def image_cache(path, size, square=False):
|
||||
if square:
|
||||
suffix = str(size) + "s"
|
||||
else:
|
||||
suffix = str(size)
|
||||
return cache_base(path, True) + "_" + suffix + ".jpg"
|
||||
|
||||
|
||||
def video_cache(path):
|
||||
return cache_base(path, True) + ".mp4"
|
||||
|
||||
|
||||
def file_mtime(path):
|
||||
return datetime.fromtimestamp(int(os.path.getmtime(path)))
|
||||
|
@ -10,7 +10,6 @@ import gc
|
||||
import tempfile
|
||||
from VideoToolWrapper import *
|
||||
|
||||
|
||||
def make_photo_thumbs(self, original_path, thumb_path, size):
|
||||
# The pool methods use a queue.Queue to pass tasks to the worker processes.
|
||||
# Everything that goes through the queue.Queue must be pickable, and since
|
||||
@ -18,7 +17,6 @@ def make_photo_thumbs(self, original_path, thumb_path, size):
|
||||
# This is why we have this "dummy" function, so that it's pickable.
|
||||
self._photo_thumbnail(original_path, thumb_path, size[0], size[1])
|
||||
|
||||
|
||||
class Album(object):
|
||||
def __init__(self, path):
|
||||
self._path = trim_base(path)
|
||||
@ -26,26 +24,20 @@ class Album(object):
|
||||
self._albums = list()
|
||||
self._photos_sorted = True
|
||||
self._albums_sorted = True
|
||||
|
||||
@property
|
||||
def photos(self):
|
||||
return self._photos
|
||||
|
||||
@property
|
||||
def albums(self):
|
||||
return self._albums
|
||||
|
||||
@property
|
||||
def path(self):
|
||||
return self._path
|
||||
|
||||
def __str__(self):
|
||||
return self.path
|
||||
|
||||
@property
|
||||
def cache_path(self):
|
||||
return json_cache(self.path)
|
||||
|
||||
@property
|
||||
def date(self):
|
||||
self._sort()
|
||||
@ -56,18 +48,14 @@ class Album(object):
|
||||
elif len(self._albums) == 0:
|
||||
return self._photos[-1].date
|
||||
return max(self._photos[-1].date, self._albums[-1].date)
|
||||
|
||||
def __cmp__(self, other):
|
||||
return cmp(self.date, other.date)
|
||||
|
||||
def add_photo(self, photo):
|
||||
self._photos.append(photo)
|
||||
self._photos_sorted = False
|
||||
|
||||
def add_album(self, album):
|
||||
self._albums.append(album)
|
||||
self._albums_sorted = False
|
||||
|
||||
def _sort(self):
|
||||
if not self._photos_sorted:
|
||||
self._photos.sort()
|
||||
@ -75,7 +63,6 @@ class Album(object):
|
||||
if not self._albums_sorted:
|
||||
self._albums.sort()
|
||||
self._albums_sorted = True
|
||||
|
||||
@property
|
||||
def empty(self):
|
||||
if len(self._photos) != 0:
|
||||
@ -92,14 +79,12 @@ class Album(object):
|
||||
fp = open(os.path.join(base_dir, self.cache_path), 'w')
|
||||
json.dump(self, fp, cls=PhotoAlbumEncoder)
|
||||
fp.close()
|
||||
|
||||
@staticmethod
|
||||
def from_cache(path):
|
||||
fp = open(path, "r")
|
||||
dictionary = json.load(fp)
|
||||
fp.close()
|
||||
return Album.from_dict(dictionary)
|
||||
|
||||
@staticmethod
|
||||
def from_dict(dictionary, cripple=True):
|
||||
album = Album(dictionary["path"])
|
||||
@ -110,39 +95,26 @@ class Album(object):
|
||||
album.add_album(Album.from_dict(subalbum), cripple)
|
||||
album._sort()
|
||||
return album
|
||||
|
||||
def to_dict(self, cripple=True):
|
||||
self._sort()
|
||||
subalbums = []
|
||||
if cripple:
|
||||
for sub in self._albums:
|
||||
if not sub.empty:
|
||||
subalbums.append({
|
||||
"path": trim_base_custom(sub.path, self._path),
|
||||
"date": sub.date
|
||||
})
|
||||
subalbums.append({ "path": trim_base_custom(sub.path, self._path), "date": sub.date })
|
||||
else:
|
||||
for sub in self._albums:
|
||||
if not sub.empty:
|
||||
subalbums.append(sub)
|
||||
return {
|
||||
"path": self.path,
|
||||
"date": self.date,
|
||||
"albums": subalbums,
|
||||
"photos": self._photos
|
||||
}
|
||||
|
||||
return { "path": self.path, "date": self.date, "albums": subalbums, "photos": self._photos }
|
||||
def photo_from_path(self, path):
|
||||
for photo in self._photos:
|
||||
if trim_base(path) == photo._path:
|
||||
return photo
|
||||
return None
|
||||
|
||||
|
||||
class Photo(object):
|
||||
thumb_sizes = [
|
||||
(75, True), (150, True), (640, False), (1024, False), (1600, False)]
|
||||
|
||||
thumb_sizes = [ (75, True), (150, True), (640, False), (1024, False), (1600, False) ]
|
||||
def __init__(self, path, thumb_path=None, attributes=None):
|
||||
self._path = trim_base(path)
|
||||
self.is_valid = True
|
||||
@ -193,17 +165,11 @@ class Photo(object):
|
||||
exif = {}
|
||||
for tag, value in info.items():
|
||||
decoded = TAGS.get(tag, tag)
|
||||
if ((isinstance(value, tuple) or isinstance(value, list)) and
|
||||
(isinstance(decoded, str) or
|
||||
isinstance(decoded, unicode)) and
|
||||
decoded.startswith("DateTime") and
|
||||
len(value) >= 1):
|
||||
if (isinstance(value, tuple) or isinstance(value, list)) and (isinstance(decoded, str) or isinstance(decoded, unicode)) and decoded.startswith("DateTime") and len(value) >= 1:
|
||||
value = value[0]
|
||||
if isinstance(value, str) or isinstance(value, unicode):
|
||||
value = value.strip().partition("\x00")[0]
|
||||
if ((isinstance(decoded, str) or
|
||||
isinstance(decoded, unicode)) and
|
||||
decoded.startswith("DateTime")):
|
||||
if (isinstance(decoded, str) or isinstance(decoded, unicode)) and decoded.startswith("DateTime"):
|
||||
try:
|
||||
value = datetime.strptime(value, '%Y:%m:%d %H:%M:%S')
|
||||
except KeyboardInterrupt:
|
||||
@ -212,18 +178,12 @@ class Photo(object):
|
||||
continue
|
||||
exif[decoded] = value
|
||||
|
||||
_pm = self._photo_metadata
|
||||
|
||||
if "Orientation" in exif:
|
||||
self._orientation = exif["Orientation"]
|
||||
self._orientation = exif["Orientation"];
|
||||
if self._orientation in range(5, 9):
|
||||
self._attributes["size"] = (
|
||||
self._attributes["size"][1], self._attributes["size"][0])
|
||||
if self._orientation - 1 < len(
|
||||
_pm.orientation_list):
|
||||
self._attributes["orientation"] = (
|
||||
_pm.orientation_list[
|
||||
self._orientation - 1])
|
||||
self._attributes["size"] = (self._attributes["size"][1], self._attributes["size"][0])
|
||||
if self._orientation - 1 < len(self._photo_metadata.orientation_list):
|
||||
self._attributes["orientation"] = self._photo_metadata.orientation_list[self._orientation - 1]
|
||||
if "Make" in exif:
|
||||
self._attributes["make"] = exif["Make"]
|
||||
if "Model" in exif:
|
||||
@ -242,182 +202,64 @@ class Photo(object):
|
||||
self._attributes["iso"] = exif["PhotographicSensitivity"]
|
||||
if "ExposureTime" in exif:
|
||||
self._attributes["exposureTime"] = exif["ExposureTime"]
|
||||
if exif.get("Flash") in _pm.flash_dictionary:
|
||||
if "Flash" in exif and exif["Flash"] in self._photo_metadata.flash_dictionary:
|
||||
try:
|
||||
self._attributes["flash"] = _pm.flash_dictionary[exif["Flash"]]
|
||||
self._attributes["flash"] = self._photo_metadata.flash_dictionary[exif["Flash"]]
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except:
|
||||
pass
|
||||
if exif.get("LightSource") in _pm.light_source_dictionary:
|
||||
if "LightSource" in exif and exif["LightSource"] in self._photo_metadata.light_source_dictionary:
|
||||
try:
|
||||
self._attributes["lightSource"] = _pm.light_source_dictionary[
|
||||
exif["LightSource"]]
|
||||
self._attributes["lightSource"] = self._photo_metadata.light_source_dictionary[exif["LightSource"]]
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except:
|
||||
pass
|
||||
if "ExposureProgram" in exif and exif["ExposureProgram"] < len(
|
||||
_pm.exposure_list):
|
||||
self._attributes["exposureProgram"] = _pm.exposure_list[
|
||||
exif["ExposureProgram"]]
|
||||
if "ExposureProgram" in exif and exif["ExposureProgram"] < len(self._photo_metadata.exposure_list):
|
||||
self._attributes["exposureProgram"] = self._photo_metadata.exposure_list[exif["ExposureProgram"]]
|
||||
if "SpectralSensitivity" in exif:
|
||||
self._attributes["spectralSensitivity"] = exif[
|
||||
"SpectralSensitivity"]
|
||||
if "MeteringMode" in exif and exif["MeteringMode"] < len(
|
||||
_pm.metering_list):
|
||||
self._attributes["meteringMode"] = _pm.metering_list[
|
||||
exif["MeteringMode"]]
|
||||
if "SensingMethod" in exif and exif["SensingMethod"] < len(
|
||||
_pm.sensing_method_list):
|
||||
self._attributes["sensingMethod"] = _pm.sensing_method_list[
|
||||
exif["SensingMethod"]]
|
||||
if "SceneCaptureType" in exif and exif["SceneCaptureType"] < len(
|
||||
_pm.scene_capture_type_list):
|
||||
self._attributes["sceneCaptureType"] = _pm.scene_capture_type_list[
|
||||
exif["SceneCaptureType"]]
|
||||
if "SubjectDistanceRange" in exif and exif[
|
||||
"SubjectDistanceRange"] < len(_pm.subject_distance_range_list):
|
||||
self._attributes[
|
||||
"subjectDistanceRange"] = _pm.subject_distance_range_list[
|
||||
exif["SubjectDistanceRange"]]
|
||||
self._attributes["spectralSensitivity"] = exif["SpectralSensitivity"]
|
||||
if "MeteringMode" in exif and exif["MeteringMode"] < len(self._photo_metadata.metering_list):
|
||||
self._attributes["meteringMode"] = self._photo_metadata.metering_list[exif["MeteringMode"]]
|
||||
if "SensingMethod" in exif and exif["SensingMethod"] < len(self._photo_metadata.sensing_method_list):
|
||||
self._attributes["sensingMethod"] = self._photo_metadata.sensing_method_list[exif["SensingMethod"]]
|
||||
if "SceneCaptureType" in exif and exif["SceneCaptureType"] < len(self._photo_metadata.scene_capture_type_list):
|
||||
self._attributes["sceneCaptureType"] = self._photo_metadata.scene_capture_type_list[exif["SceneCaptureType"]]
|
||||
if "SubjectDistanceRange" in exif and exif["SubjectDistanceRange"] < len(self._photo_metadata.subject_distance_range_list):
|
||||
self._attributes["subjectDistanceRange"] = self._photo_metadata.subject_distance_range_list[exif["SubjectDistanceRange"]]
|
||||
if "ExposureCompensation" in exif:
|
||||
self._attributes["exposureCompensation"] = exif[
|
||||
"ExposureCompensation"]
|
||||
self._attributes["exposureCompensation"] = exif["ExposureCompensation"]
|
||||
if "ExposureBiasValue" in exif:
|
||||
self._attributes["exposureCompensation"] = exif[
|
||||
"ExposureBiasValue"]
|
||||
self._attributes["exposureCompensation"] = exif["ExposureBiasValue"]
|
||||
if "DateTimeOriginal" in exif:
|
||||
try:
|
||||
self._attributes["dateTimeOriginal"] = datetime.strptime(
|
||||
exif["DateTimeOriginal"], '%Y:%m:%d %H:%M:%S')
|
||||
self._attributes["dateTimeOriginal"] = datetime.strptime(exif["DateTimeOriginal"], '%Y:%m:%d %H:%M:%S')
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except TypeError:
|
||||
self._attributes["dateTimeOriginal"] = exif["DateTimeOriginal"]
|
||||
if "DateTime" in exif:
|
||||
try:
|
||||
self._attributes["dateTime"] = datetime.strptime(
|
||||
exif["DateTime"], '%Y:%m:%d %H:%M:%S')
|
||||
self._attributes["dateTime"] = datetime.strptime(exif["DateTime"], '%Y:%m:%d %H:%M:%S')
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except TypeError:
|
||||
self._attributes["dateTime"] = exif["DateTime"]
|
||||
|
||||
_photo_metadata.flash_dictionary = {
|
||||
0x0: "No Flash",
|
||||
0x1: "Fired",
|
||||
0x5: "Fired, Return not detected",
|
||||
0x7: "Fired, Return detected",
|
||||
0x8: "On, Did not fire",
|
||||
0x9: "On, Fired",
|
||||
0xd: "On, Return not detected",
|
||||
0xf: "On, Return detected",
|
||||
0x10: "Off, Did not fire",
|
||||
0x14: "Off, Did not fire, Return not detected",
|
||||
0x18: "Auto, Did not fire",
|
||||
0x19: "Auto, Fired",
|
||||
0x1d: "Auto, Fired, Return not detected",
|
||||
0x1f: "Auto, Fired, Return detected",
|
||||
0x20: "No flash function",
|
||||
0x30: "Off, No flash function",
|
||||
0x41: "Fired, Red-eye reduction",
|
||||
0x45: "Fired, Red-eye reduction, Return not detected",
|
||||
0x47: "Fired, Red-eye reduction, Return detected",
|
||||
0x49: "On, Red-eye reduction",
|
||||
0x4d: "On, Red-eye reduction, Return not detected",
|
||||
0x4f: "On, Red-eye reduction, Return detected",
|
||||
0x50: "Off, Red-eye reduction",
|
||||
0x58: "Auto, Did not fire, Red-eye reduction",
|
||||
0x59: "Auto, Fired, Red-eye reduction",
|
||||
0x5d: "Auto, Fired, Red-eye reduction, Return not detected",
|
||||
0x5f: "Auto, Fired, Red-eye reduction, Return detected"
|
||||
}
|
||||
_photo_metadata.light_source_dictionary = {
|
||||
0: "Unknown",
|
||||
1: "Daylight",
|
||||
2: "Fluorescent",
|
||||
3: "Tungsten (incandescent light)",
|
||||
4: "Flash",
|
||||
9: "Fine weather",
|
||||
10: "Cloudy weather",
|
||||
11: "Shade",
|
||||
12: "Daylight fluorescent (D 5700 - 7100K)",
|
||||
13: "Day white fluorescent (N 4600 - 5400K)",
|
||||
14: "Cool white fluorescent (W 3900 - 4500K)",
|
||||
15: "White fluorescent (WW 3200 - 3700K)",
|
||||
17: "Standard light A",
|
||||
18: "Standard light B",
|
||||
19: "Standard light C",
|
||||
20: "D55",
|
||||
21: "D65",
|
||||
22: "D75",
|
||||
23: "D50",
|
||||
24: "ISO studio tungsten"
|
||||
}
|
||||
_photo_metadata.metering_list = [
|
||||
"Unknown",
|
||||
"Average",
|
||||
"Center-weighted average",
|
||||
"Spot",
|
||||
"Multi-spot",
|
||||
"Multi-segment",
|
||||
"Partial"
|
||||
]
|
||||
_photo_metadata.exposure_list = [
|
||||
"Not Defined",
|
||||
"Manual",
|
||||
"Program AE",
|
||||
"Aperture-priority AE",
|
||||
"Shutter speed priority AE",
|
||||
"Creative (Slow speed)",
|
||||
"Action (High speed)",
|
||||
"Portrait",
|
||||
"Landscape",
|
||||
"Bulb"
|
||||
]
|
||||
_photo_metadata.orientation_list = [
|
||||
"Horizontal (normal)",
|
||||
"Mirror horizontal",
|
||||
"Rotate 180",
|
||||
"Mirror vertical",
|
||||
"Mirror horizontal and rotate 270 CW",
|
||||
"Rotate 90 CW",
|
||||
"Mirror horizontal and rotate 90 CW",
|
||||
"Rotate 270 CW"
|
||||
]
|
||||
_photo_metadata.sensing_method_list = [
|
||||
"Not defined",
|
||||
"One-chip color area sensor",
|
||||
"Two-chip color area sensor",
|
||||
"Three-chip color area sensor",
|
||||
"Color sequential area sensor",
|
||||
"Trilinear sensor",
|
||||
"Color sequential linear sensor"
|
||||
]
|
||||
_photo_metadata.scene_capture_type_list = [
|
||||
"Standard",
|
||||
"Landscape",
|
||||
"Portrait",
|
||||
"Night scene"
|
||||
]
|
||||
_photo_metadata.subject_distance_range_list = [
|
||||
"Unknown",
|
||||
"Macro",
|
||||
"Close view",
|
||||
"Distant view"
|
||||
]
|
||||
_photo_metadata.flash_dictionary = {0x0: "No Flash", 0x1: "Fired",0x5: "Fired, Return not detected",0x7: "Fired, Return detected",0x8: "On, Did not fire",0x9: "On, Fired",0xd: "On, Return not detected",0xf: "On, Return detected",0x10: "Off, Did not fire",0x14: "Off, Did not fire, Return not detected",0x18: "Auto, Did not fire",0x19: "Auto, Fired",0x1d: "Auto, Fired, Return not detected",0x1f: "Auto, Fired, Return detected",0x20: "No flash function",0x30: "Off, No flash function",0x41: "Fired, Red-eye reduction",0x45: "Fired, Red-eye reduction, Return not detected",0x47: "Fired, Red-eye reduction, Return detected",0x49: "On, Red-eye reduction",0x4d: "On, Red-eye reduction, Return not detected",0x4f: "On, Red-eye reduction, Return detected",0x50: "Off, Red-eye reduction",0x58: "Auto, Did not fire, Red-eye reduction",0x59: "Auto, Fired, Red-eye reduction",0x5d: "Auto, Fired, Red-eye reduction, Return not detected",0x5f: "Auto, Fired, Red-eye reduction, Return detected"}
|
||||
_photo_metadata.light_source_dictionary = {0: "Unknown", 1: "Daylight", 2: "Fluorescent", 3: "Tungsten (incandescent light)", 4: "Flash", 9: "Fine weather", 10: "Cloudy weather", 11: "Shade", 12: "Daylight fluorescent (D 5700 - 7100K)", 13: "Day white fluorescent (N 4600 - 5400K)", 14: "Cool white fluorescent (W 3900 - 4500K)", 15: "White fluorescent (WW 3200 - 3700K)", 17: "Standard light A", 18: "Standard light B", 19: "Standard light C", 20: "D55", 21: "D65", 22: "D75", 23: "D50", 24: "ISO studio tungsten"}
|
||||
_photo_metadata.metering_list = ["Unknown", "Average", "Center-weighted average", "Spot", "Multi-spot", "Multi-segment", "Partial"]
|
||||
_photo_metadata.exposure_list = ["Not Defined", "Manual", "Program AE", "Aperture-priority AE", "Shutter speed priority AE", "Creative (Slow speed)", "Action (High speed)", "Portrait", "Landscape", "Bulb"]
|
||||
_photo_metadata.orientation_list = ["Horizontal (normal)", "Mirror horizontal", "Rotate 180", "Mirror vertical", "Mirror horizontal and rotate 270 CW", "Rotate 90 CW", "Mirror horizontal and rotate 90 CW", "Rotate 270 CW"]
|
||||
_photo_metadata.sensing_method_list = ["Not defined", "One-chip color area sensor", "Two-chip color area sensor", "Three-chip color area sensor", "Color sequential area sensor", "Trilinear sensor", "Color sequential linear sensor"]
|
||||
_photo_metadata.scene_capture_type_list = ["Standard", "Landscape", "Portrait", "Night scene"]
|
||||
_photo_metadata.subject_distance_range_list = ["Unknown", "Macro", "Close view", "Distant view"]
|
||||
|
||||
|
||||
def _video_metadata(self, path, original=True):
|
||||
p = VideoProbeWrapper().call(
|
||||
'-show_format',
|
||||
'-show_streams',
|
||||
'-of',
|
||||
'json',
|
||||
'-loglevel',
|
||||
'0',
|
||||
path)
|
||||
if p is False:
|
||||
p = VideoProbeWrapper().call('-show_format', '-show_streams', '-of', 'json', '-loglevel', '0', path)
|
||||
if p == False:
|
||||
self.is_valid = False
|
||||
return
|
||||
info = json.loads(p)
|
||||
@ -430,8 +272,7 @@ class Photo(object):
|
||||
if "tags" in s and "rotate" in s["tags"]:
|
||||
self._attributes["rotate"] = s["tags"]["rotate"]
|
||||
if original:
|
||||
self._attributes["originalSize"] = (
|
||||
int(s["width"]), int(s["height"]))
|
||||
self._attributes["originalSize"] = (int(s["width"]), int(s["height"]))
|
||||
# we break, because a video can contain several streams
|
||||
# this way we only get/use values from the first stream
|
||||
break
|
||||
@ -446,20 +287,11 @@ class Photo(object):
|
||||
# lets use this
|
||||
|
||||
try:
|
||||
self._attributes["dateTimeVideo"] = datetime.strptime(
|
||||
info['format']['tags']['creation_time'],
|
||||
'%Y-%m-%d %H:%M:%S')
|
||||
self._attributes["dateTimeVideo"] = datetime.strptime(info['format']['tags']['creation_time'], '%Y-%m-%d %H:%M:%S')
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except TypeError:
|
||||
pass
|
||||
except ValueError:
|
||||
try:
|
||||
self._attributes["dateTimeVideo"] = datetime.strptime(
|
||||
info['format']['tags']['creation_time'],
|
||||
'%Y-%m-%d %H:%M:%S.%fZ')
|
||||
except ValueError:
|
||||
pass
|
||||
|
||||
def _photo_thumbnail(self, original_path, thumb_path, size, square=False):
|
||||
try:
|
||||
@ -482,15 +314,13 @@ class Photo(object):
|
||||
mirror = image.transpose(Image.FLIP_TOP_BOTTOM)
|
||||
elif self._orientation == 5:
|
||||
# Horizontal Mirror + Rotation 270
|
||||
mirror = image.transpose(
|
||||
Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_270)
|
||||
mirror = image.transpose(Image.FLIP_TOP_BOTTOM).transpose(Image.ROTATE_270)
|
||||
elif self._orientation == 6:
|
||||
# Rotation 270
|
||||
mirror = image.transpose(Image.ROTATE_270)
|
||||
elif self._orientation == 7:
|
||||
# Vertical Mirror + Rotation 270
|
||||
mirror = image.transpose(
|
||||
Image.FLIP_LEFT_RIGHT).transpose(Image.ROTATE_270)
|
||||
mirror = image.transpose(Image.FLIP_LEFT_RIGHT).transpose(Image.ROTATE_270)
|
||||
elif self._orientation == 8:
|
||||
# Rotation 90
|
||||
mirror = image.transpose(Image.ROTATE_90)
|
||||
@ -499,16 +329,12 @@ class Photo(object):
|
||||
self._thumbnail(image, original_path, thumb_path, size, square)
|
||||
|
||||
def _thumbnail(self, image, original_path, thumb_path, size, square):
|
||||
thumb_path = os.path.join(
|
||||
thumb_path, image_cache(self._path, size, square))
|
||||
info_string = "%s -> %spx" % (
|
||||
os.path.basename(original_path),
|
||||
str(size))
|
||||
thumb_path = os.path.join(thumb_path, image_cache(self._path, size, square))
|
||||
info_string = "%s -> %spx" % (os.path.basename(original_path), str(size))
|
||||
if square:
|
||||
info_string += ", square"
|
||||
message("thumbing", info_string)
|
||||
if os.path.exists(thumb_path) and file_mtime(
|
||||
thumb_path) >= self._attributes["dateTimeFile"]:
|
||||
if os.path.exists(thumb_path) and file_mtime(thumb_path) >= self._attributes["dateTimeFile"]:
|
||||
return
|
||||
gc.collect()
|
||||
try:
|
||||
@ -560,8 +386,7 @@ class Photo(object):
|
||||
|
||||
try:
|
||||
for size in Photo.thumb_sizes:
|
||||
pool.apply_async(make_photo_thumbs, args=(
|
||||
self, original_path, thumb_path, size))
|
||||
pool.apply_async(make_photo_thumbs, args = (self, original_path, thumb_path, size))
|
||||
except:
|
||||
pool.terminate()
|
||||
|
||||
@ -569,7 +394,7 @@ class Photo(object):
|
||||
pool.join()
|
||||
|
||||
def _video_thumbnails(self, thumb_path, original_path):
|
||||
(tfd, tfn) = tempfile.mkstemp()
|
||||
(tfd, tfn) = tempfile.mkstemp();
|
||||
p = VideoTranscodeWrapper().call(
|
||||
'-i', original_path, # original file to extract thumbs from
|
||||
'-f', 'image2', # extract image
|
||||
@ -580,10 +405,8 @@ class Photo(object):
|
||||
'-y', # don't prompt for overwrite
|
||||
tfn # temporary file to store extracted image
|
||||
)
|
||||
if p is False:
|
||||
message(
|
||||
"couldn't extract video frame",
|
||||
os.path.basename(original_path))
|
||||
if p == False:
|
||||
message("couldn't extract video frame", os.path.basename(original_path))
|
||||
try:
|
||||
os.unlink(tfn)
|
||||
except:
|
||||
@ -616,8 +439,7 @@ class Photo(object):
|
||||
mirror = image.transpose(Image.ROTATE_90)
|
||||
for size in Photo.thumb_sizes:
|
||||
if size[1]:
|
||||
self._thumbnail(
|
||||
mirror, original_path, thumb_path, size[0], size[1])
|
||||
self._thumbnail(mirror, original_path, thumb_path, size[0], size[1])
|
||||
try:
|
||||
os.unlink(tfn)
|
||||
except:
|
||||
@ -630,9 +452,7 @@ class Photo(object):
|
||||
transcode_cmd = [
|
||||
'-i', original_path, # original file to be encoded
|
||||
'-c:v', 'libx264', # set h264 as videocodec
|
||||
# set specific preset that provides a certain encoding speed to
|
||||
# compression ratio
|
||||
'-preset', 'slow',
|
||||
'-preset', 'slow', # set specific preset that provides a certain encoding speed to compression ratio
|
||||
'-profile:v', 'baseline', # set output to specific h264 profile
|
||||
'-level', '3.0', # sets highest compatibility with target devices
|
||||
'-crf', '20', # set quality
|
||||
@ -641,8 +461,7 @@ class Photo(object):
|
||||
'-c:a', 'aac', # set aac as audiocodec
|
||||
'-ac', '2', # force two audiochannels
|
||||
'-ab', '160k', # set audiobitrate to 160Kbps
|
||||
# limits max rate, will degrade CRF if needed
|
||||
'-maxrate', '10000000',
|
||||
'-maxrate', '10000000', # limits max rate, will degrade CRF if needed
|
||||
'-bufsize', '10000000', # define how much the client should buffer
|
||||
'-f', 'mp4', # fileformat mp4
|
||||
'-threads', str(num_of_cores), # number of cores (all minus one)
|
||||
@ -652,9 +471,7 @@ class Photo(object):
|
||||
filters = []
|
||||
info_string = "%s -> mp4, h264" % (os.path.basename(original_path))
|
||||
message("transcoding", info_string)
|
||||
if (os.path.exists(transcode_path) and
|
||||
file_mtime(
|
||||
transcode_path) >= self._attributes["dateTimeFile"]):
|
||||
if os.path.exists(transcode_path) and file_mtime(transcode_path) >= self._attributes["dateTimeFile"]:
|
||||
self._video_metadata(transcode_path, False)
|
||||
return
|
||||
if "originalSize" in self._attributes:
|
||||
@ -684,19 +501,17 @@ class Photo(object):
|
||||
tmp_transcode_cmd = transcode_cmd[:]
|
||||
transcode_cmd.append(transcode_path)
|
||||
p = VideoTranscodeWrapper().call(*transcode_cmd)
|
||||
if p is False:
|
||||
if p == False:
|
||||
# add another option, try transcoding again
|
||||
# done to avoid this error;
|
||||
# x264 [error]: baseline profile doesn't support 4:2:2
|
||||
message(
|
||||
"transcoding failure, trying yuv420p",
|
||||
os.path.basename(original_path))
|
||||
message("transcoding failure, trying yuv420p", os.path.basename(original_path))
|
||||
tmp_transcode_cmd.append('-pix_fmt')
|
||||
tmp_transcode_cmd.append('yuv420p')
|
||||
tmp_transcode_cmd.append(transcode_path)
|
||||
p = VideoTranscodeWrapper().call(*tmp_transcode_cmd)
|
||||
|
||||
if p is False:
|
||||
if p == False:
|
||||
message("transcoding failure", os.path.basename(original_path))
|
||||
try:
|
||||
os.unlink(transcode_path)
|
||||
@ -709,33 +524,25 @@ class Photo(object):
|
||||
@property
|
||||
def name(self):
|
||||
return os.path.basename(self._path)
|
||||
|
||||
def __str__(self):
|
||||
return self.name
|
||||
|
||||
@property
|
||||
def path(self):
|
||||
return self._path
|
||||
|
||||
@property
|
||||
def image_caches(self):
|
||||
caches = []
|
||||
if ("mediaType" in self._attributes and
|
||||
self._attributes["mediaType"] == "video"):
|
||||
if "mediaType" in self._attributes and self._attributes["mediaType"] == "video":
|
||||
for size in Photo.thumb_sizes:
|
||||
if size[1]:
|
||||
caches.append(image_cache(self._path, size[0], size[1]))
|
||||
caches.append(video_cache(self._path))
|
||||
else:
|
||||
caches = [
|
||||
image_cache(self._path, size[0], size[1])
|
||||
for size in Photo.thumb_sizes
|
||||
]
|
||||
caches = [image_cache(self._path, size[0], size[1]) for size in Photo.thumb_sizes]
|
||||
return caches
|
||||
|
||||
@property
|
||||
def date(self):
|
||||
correct_date = None
|
||||
correct_date = None;
|
||||
if not self.is_valid:
|
||||
correct_date = datetime(1900, 1, 1)
|
||||
if "dateTimeVideo" in self._attributes:
|
||||
@ -758,7 +565,6 @@ class Photo(object):
|
||||
@property
|
||||
def attributes(self):
|
||||
return self._attributes
|
||||
|
||||
@staticmethod
|
||||
def from_dict(dictionary, basepath):
|
||||
del dictionary["date"]
|
||||
@ -767,21 +573,17 @@ class Photo(object):
|
||||
for key, value in dictionary.items():
|
||||
if key.startswith("dateTime"):
|
||||
try:
|
||||
dictionary[key] = datetime.strptime(
|
||||
dictionary[key],
|
||||
"%a %b %d %H:%M:%S %Y")
|
||||
dictionary[key] = datetime.strptime(dictionary[key], "%a %b %d %H:%M:%S %Y")
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except:
|
||||
pass
|
||||
return Photo(path, None, dictionary)
|
||||
|
||||
def to_dict(self):
|
||||
photo = {"name": self.name, "date": self.date}
|
||||
photo = { "name": self.name, "date": self.date }
|
||||
photo.update(self.attributes)
|
||||
return photo
|
||||
|
||||
|
||||
class PhotoAlbumEncoder(json.JSONEncoder):
|
||||
def default(self, obj):
|
||||
if isinstance(obj, datetime):
|
||||
@ -789,3 +591,4 @@ class PhotoAlbumEncoder(json.JSONEncoder):
|
||||
if isinstance(obj, Album) or isinstance(obj, Photo):
|
||||
return obj.to_dict()
|
||||
return json.JSONEncoder.default(self, obj)
|
||||
|
||||
|
@ -6,13 +6,10 @@ from PhotoAlbum import Photo, Album, PhotoAlbumEncoder
|
||||
from CachePath import *
|
||||
import json
|
||||
|
||||
|
||||
class TreeWalker:
|
||||
def __init__(self, album_path, cache_path):
|
||||
self.album_path = os.path.abspath(
|
||||
album_path).decode(sys.getfilesystemencoding())
|
||||
self.cache_path = os.path.abspath(
|
||||
cache_path).decode(sys.getfilesystemencoding())
|
||||
self.album_path = os.path.abspath(album_path).decode(sys.getfilesystemencoding())
|
||||
self.cache_path = os.path.abspath(cache_path).decode(sys.getfilesystemencoding())
|
||||
set_cache_path_base(self.album_path)
|
||||
self.all_albums = list()
|
||||
self.all_photos = list()
|
||||
@ -20,7 +17,6 @@ class TreeWalker:
|
||||
self.big_lists()
|
||||
self.remove_stale()
|
||||
message("complete", "")
|
||||
|
||||
def walk(self, path):
|
||||
next_level()
|
||||
if not os.access(path, os.R_OK | os.X_OK):
|
||||
@ -58,8 +54,7 @@ class TreeWalker:
|
||||
raise
|
||||
except:
|
||||
next_level()
|
||||
message("unicode error", entry.decode(
|
||||
sys.getfilesystemencoding(), "replace"))
|
||||
message("unicode error", entry.decode(sys.getfilesystemencoding(), "replace"))
|
||||
back_level()
|
||||
continue
|
||||
entry = os.path.join(path, entry)
|
||||
@ -72,24 +67,18 @@ class TreeWalker:
|
||||
cache_hit = False
|
||||
if cached_album:
|
||||
cached_photo = cached_album.photo_from_path(entry)
|
||||
if (cached_photo and file_mtime(
|
||||
entry) <= cached_photo.attributes["dateTimeFile"]):
|
||||
if cached_photo and file_mtime(entry) <= cached_photo.attributes["dateTimeFile"]:
|
||||
cache_file = None
|
||||
if "mediaType" in cached_photo.attributes:
|
||||
if cached_photo.attributes["mediaType"] == "video":
|
||||
# if video
|
||||
cache_file = os.path.join(
|
||||
self.cache_path, video_cache(entry))
|
||||
cache_file = os.path.join(self.cache_path, video_cache(entry))
|
||||
else:
|
||||
# if image
|
||||
cache_file = os.path.join(
|
||||
self.cache_path,
|
||||
image_cache(entry, 1024, False))
|
||||
cache_file = os.path.join(self.cache_path, image_cache(entry, 1024, False))
|
||||
else:
|
||||
# if image
|
||||
cache_file = os.path.join(
|
||||
self.cache_path,
|
||||
image_cache(entry, 1024, False))
|
||||
cache_file = os.path.join(self.cache_path, image_cache(entry, 1024, False))
|
||||
|
||||
# at this point we have full path to cache image/video
|
||||
# check if it actually exists
|
||||
@ -115,7 +104,6 @@ class TreeWalker:
|
||||
message("empty", os.path.basename(path))
|
||||
back_level()
|
||||
return album
|
||||
|
||||
def big_lists(self):
|
||||
photo_list = []
|
||||
self.all_photos.sort()
|
||||
@ -125,11 +113,9 @@ class TreeWalker:
|
||||
fp = open(os.path.join(self.cache_path, "all_photos.json"), 'w')
|
||||
json.dump(photo_list, fp, cls=PhotoAlbumEncoder)
|
||||
fp.close()
|
||||
|
||||
def remove_stale(self):
|
||||
message("cleanup", "building stale list")
|
||||
all_cache_entries = {"all_photos.json": True,
|
||||
"latest_photos.json": True}
|
||||
all_cache_entries = { "all_photos.json": True, "latest_photos.json": True }
|
||||
for album in self.all_albums:
|
||||
all_cache_entries[album.cache_path] = True
|
||||
for photo in self.all_photos:
|
||||
|
@ -2,7 +2,6 @@ from CachePath import message
|
||||
import os
|
||||
import subprocess
|
||||
|
||||
|
||||
class VideoToolWrapper(object):
|
||||
def call(self, *args):
|
||||
path = args[-1]
|
||||
@ -35,14 +34,12 @@ class VideoToolWrapper(object):
|
||||
except:
|
||||
pass
|
||||
|
||||
|
||||
class VideoTranscodeWrapper(VideoToolWrapper):
|
||||
def __init__(self):
|
||||
self.wrappers = ['avconv', 'ffmpeg']
|
||||
self.check_output = False
|
||||
self.cleanup = True
|
||||
|
||||
|
||||
class VideoProbeWrapper(VideoToolWrapper):
|
||||
def __init__(self):
|
||||
self.wrappers = ['avprobe', 'ffprobe']
|
||||
|
@ -1,176 +1,10 @@
|
||||
import os
|
||||
from TreeWalker import TreeWalker
|
||||
from functools import wraps
|
||||
from mimetypes import guess_type
|
||||
from random import shuffle
|
||||
|
||||
from flask import Flask, Response, abort, json, jsonify, request
|
||||
from flask_login import current_user, login_user, logout_user
|
||||
|
||||
from .process import send_process
|
||||
from .jsonp import jsonp
|
||||
from .login import admin_user, load_user
|
||||
from .login import login_manager
|
||||
from flask import Flask
|
||||
from flask_login import LoginManager
|
||||
import os.path
|
||||
|
||||
app = Flask(__name__)
|
||||
app.config.from_pyfile(
|
||||
os.path.join(os.path.dirname(os.path.abspath(__file__)), "app.cfg"))
|
||||
|
||||
app.config.from_pyfile(os.path.join(os.path.dirname(os.path.abspath(__file__)), "app.cfg"))
|
||||
login_manager = LoginManager()
|
||||
import login
|
||||
login_manager.setup_app(app)
|
||||
|
||||
cwd = os.path.dirname(os.path.abspath(__file__))
|
||||
permission_map = app.config.get('PERMISSION_MAP', [])
|
||||
|
||||
|
||||
def accel_redirect(internal, real, relative_name):
|
||||
real_path = os.path.join(real, relative_name)
|
||||
internal_path = os.path.join(internal, relative_name)
|
||||
if not os.path.isfile(real_path):
|
||||
abort(404)
|
||||
mimetype = None
|
||||
types = guess_type(real_path)
|
||||
if len(types) != 0:
|
||||
mimetype = types[0]
|
||||
response = Response(mimetype=mimetype)
|
||||
response.headers.add("X-Accel-Redirect", internal_path)
|
||||
response.cache_control.public = True
|
||||
if mimetype == "application/json":
|
||||
response.cache_control.max_age = 3600
|
||||
else:
|
||||
response.cache_control.max_age = 29030400
|
||||
return response
|
||||
|
||||
|
||||
def cache_base(path):
|
||||
path = path.replace(
|
||||
'/', '-').replace(
|
||||
' ', '_').replace(
|
||||
'(', '').replace(
|
||||
'&', '').replace(
|
||||
',', '').replace(
|
||||
')', '').replace(
|
||||
'#', '').replace(
|
||||
'[', '').replace(
|
||||
']', '').replace(
|
||||
'"', '').replace(
|
||||
"'", '').replace(
|
||||
'_-_', '-').lower()
|
||||
|
||||
while path.find("--") != -1:
|
||||
path = path.replace("--", "-")
|
||||
|
||||
while path.find("__") != -1:
|
||||
path = path.replace("__", "_")
|
||||
|
||||
if len(path) == 0:
|
||||
path = "root"
|
||||
|
||||
return path
|
||||
|
||||
|
||||
def has_permission(path):
|
||||
if not current_user.is_anonymous and current_user.is_admin:
|
||||
return True
|
||||
|
||||
for auth_path in permission_map.keys():
|
||||
# this is a protected object
|
||||
if (path.startswith(auth_path) or
|
||||
path.startswith(cache_base(auth_path))):
|
||||
if current_user.is_anonymous:
|
||||
return False
|
||||
if current_user.id in permission_map.get(auth_path, []):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def admin_required(fn):
|
||||
@wraps(fn)
|
||||
def decorated_view(*args, **kwargs):
|
||||
if (query_is_admin_user(request.args) or
|
||||
(current_user.is_authenticated and current_user.admin)):
|
||||
return fn(*args, **kwargs)
|
||||
return app.login_manager.unauthorized()
|
||||
return decorated_view
|
||||
|
||||
|
||||
def query_is_admin_user(query):
|
||||
username = query.get("username", None)
|
||||
password = query.get("password", None)
|
||||
return (username == app.config["ADMIN_USERNAME"] and
|
||||
password == app.config["ADMIN_PASSWORD"])
|
||||
|
||||
|
||||
@app.route("/scan")
|
||||
@admin_required
|
||||
def scan_photos():
|
||||
global cwd
|
||||
response = send_process([
|
||||
"stdbuf",
|
||||
"-oL",
|
||||
os.path.abspath(os.path.join(cwd, "../main.py")),
|
||||
os.path.abspath(app.config["ALBUM_PATH"]),
|
||||
os.path.abspath(app.config["CACHE_PATH"])
|
||||
],
|
||||
os.path.join(cwd, "scanner.pid"))
|
||||
response.headers.add("X-Accel-Buffering", "no")
|
||||
response.cache_control.no_cache = True
|
||||
return response
|
||||
|
||||
|
||||
@app.route("/auth")
|
||||
def login():
|
||||
if 'logout' in request.args:
|
||||
logout_user()
|
||||
|
||||
if current_user.is_authenticated:
|
||||
logout_user()
|
||||
|
||||
if (query_is_admin_user(request.form) or
|
||||
query_is_admin_user(request.args)):
|
||||
login_user(admin_user, remember=True)
|
||||
else:
|
||||
user_id = (request.form.get('username') or
|
||||
request.args.get('username', None))
|
||||
if user_id:
|
||||
login_user(load_user(user_id), remember=True)
|
||||
return 'You are now logged in.'
|
||||
|
||||
return ""
|
||||
|
||||
|
||||
@app.route("/albums/<path:path>")
|
||||
def albums(path):
|
||||
if not has_permission(path):
|
||||
abort(403)
|
||||
|
||||
return accel_redirect(
|
||||
app.config["ALBUM_ACCEL"], app.config["ALBUM_PATH"], path)
|
||||
|
||||
|
||||
@app.route("/cache/<path:path>")
|
||||
def cache(path):
|
||||
if not has_permission(path):
|
||||
abort(403)
|
||||
|
||||
return accel_redirect(
|
||||
app.config["CACHE_ACCEL"], app.config["CACHE_PATH"], path)
|
||||
|
||||
|
||||
@app.route("/photos")
|
||||
@jsonp
|
||||
def photos():
|
||||
f = open(os.path.join(app.config["CACHE_PATH"], "all_photos.json"), "r")
|
||||
photos = json.load(f)
|
||||
f.close()
|
||||
photos = [photo for photo in photos if has_permission(photo)]
|
||||
count = int(request.args.get("count", len(photos)))
|
||||
random = request.args.get("random") == "true"
|
||||
if random:
|
||||
shuffle(photos)
|
||||
else:
|
||||
photos.reverse()
|
||||
response = jsonify(photos=photos[0:count])
|
||||
response.cache_control.no_cache = True
|
||||
return response
|
||||
import endpoints
|
||||
|
@@ -1,9 +1,8 @@
ADMIN_USERNAME = "misterscanner"
ADMIN_PASSWORD = "ilovescanning"

PERMISSION_MAP = {
    'album/or/image/path': ['token']
}
PHOTO_USERNAME = "photos" # The GUI currently hardcodes 'photos', so don't change this
PHOTO_PASSWORD = "myphotopassword"

ALBUM_PATH = "/var/www/uwsgi/photofloat/albums"
ALBUM_ACCEL = "/internal-albums"
1  scanner/floatapp/auth.txt  (new file)
@@ -0,0 +1 @@
path/to/some/place
118  scanner/floatapp/endpoints.py  (new file)
@@ -0,0 +1,118 @@
from floatapp import app
from floatapp.login import admin_required, login_required, is_authenticated, query_is_photo_user, query_is_admin_user, photo_user, admin_user
from floatapp.jsonp import jsonp
from process import send_process
from TreeWalker import TreeWalker
from flask import Response, abort, json, request, jsonify
from flask_login import login_user, current_user
from random import shuffle
import os
from mimetypes import guess_type

cwd = os.path.dirname(os.path.abspath(__file__))


@app.route("/scan")
@admin_required
def scan_photos():
	global cwd
	response = send_process([ "stdbuf", "-oL", os.path.abspath(os.path.join(cwd, "../main.py")),
		os.path.abspath(app.config["ALBUM_PATH"]), os.path.abspath(app.config["CACHE_PATH"]) ],
		os.path.join(cwd, "scanner.pid"))
	response.headers.add("X-Accel-Buffering", "no")
	response.cache_control.no_cache = True
	return response


@app.route("/auth")
def login():
	success = False
	if current_user.is_authenticated():
		success = True
	elif query_is_photo_user(request.form) or query_is_photo_user(request.args):
		success = login_user(photo_user, remember=True)
	elif query_is_admin_user(request.form) or query_is_admin_user(request.args):
		success = login_user(admin_user, remember=True)
	if not success:
		abort(403)
	return ""


def cache_base(path):
	path = path.replace('/', '-').replace(' ', '_').replace('(', '').replace('&', '').replace(',', '').replace(')', '').replace('#', '').replace('[', '').replace(']', '').replace('"', '').replace("'", '').replace('_-_', '-').lower()
	while path.find("--") != -1:
		path = path.replace("--", "-")
	while path.find("__") != -1:
		path = path.replace("__", "_")
	if len(path) == 0:
		path = "root"
	return path


auth_list = [ ]

def read_auth_list():
	global auth_list, cwd
	f = open(os.path.join(cwd, "auth.txt"), "r")
	paths = [ ]
	for path in f:
		path = path.strip()
		paths.append(path)
		paths.append(cache_base(path))
	f.close()
	auth_list = paths

# TODO: Make this run via inotify
read_auth_list()


def check_permissions(path):
	if not is_authenticated():
		for auth_path in auth_list:
			if path.startswith(auth_path):
				abort(403)


@app.route("/albums/<path:path>")
def albums(path):
	check_permissions(path)
	return accel_redirect(app.config["ALBUM_ACCEL"], app.config["ALBUM_PATH"], path)


@app.route("/cache/<path:path>")
def cache(path):
	check_permissions(path)
	return accel_redirect(app.config["CACHE_ACCEL"], app.config["CACHE_PATH"], path)


def accel_redirect(internal, real, relative_name):
	real_path = os.path.join(real, relative_name)
	internal_path = os.path.join(internal, relative_name)
	if not os.path.isfile(real_path):
		abort(404)
	mimetype = None
	types = guess_type(real_path)
	if len(types) != 0:
		mimetype = types[0]
	response = Response(mimetype=mimetype)
	response.headers.add("X-Accel-Redirect", internal_path)
	response.cache_control.public = True
	if mimetype == "application/json":
		response.cache_control.max_age = 3600
	else:
		response.cache_control.max_age = 29030400
	return response


@app.route("/photos")
@jsonp
def photos():
	f = open(os.path.join(app.config["CACHE_PATH"], "all_photos.json"), "r")
	photos = json.load(f)
	f.close()
	if not is_authenticated():
		def allowed(photo):
			for auth_path in auth_list:
				if photo.startswith(auth_path):
					return False
			return True
		photos = [photo for photo in photos if allowed(photo)]
	count = int(request.args.get("count", len(photos)))
	random = request.args.get("random") == "true"
	if random:
		shuffle(photos)
	else:
		photos.reverse()
	response = jsonify(photos=photos[0:count])
	response.cache_control.no_cache = True
	return response
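For orientation only (this is not part of the diff): a minimal client-side sketch of how the routes above can be exercised. The base URL and the credentials are placeholders and must match `PHOTO_USERNAME`/`PHOTO_PASSWORD` in the app's configuration; the session keeps the remember-me cookie set by `/auth`, and `/photos` honours the optional `count` and `random` query parameters handled above.

```python
# Sketch only: URL and credentials are placeholders, not repository values.
import requests

BASE = "https://photos.example.com"
session = requests.Session()

# /auth logs the session in (or aborts with 403 on bad credentials).
session.get(BASE + "/auth",
            params={"username": "photos", "password": "secret"}).raise_for_status()

# /photos returns paths from the pre-generated all_photos.json,
# filtered by the auth list for anonymous visitors.
reply = session.get(BASE + "/photos", params={"count": 5, "random": "true"})
print(reply.json()["photos"])
```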
@ -5,16 +5,14 @@ import re
jsonp_validator = re.compile("^[a-zA-Z0-9_\-.]{1,128}$")


def jsonp(f):
	"""Wraps JSONified output for JSONP"""
	@wraps(f)
	def decorated_function(*args, **kwargs):
		callback = request.args.get('callback', False)
		if callback and jsonp_validator.match(callback):
			content = str(callback) + '(' + str(f(*args, **kwargs).data) + ')'
			return current_app.response_class(
				content, mimetype='application/javascript')
			content = str(callback) + '(' + str(f(*args,**kwargs).data) + ')'
			return current_app.response_class(content, mimetype='application/javascript')
		else:
			return f(*args, **kwargs)
	return decorated_function
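A usage sketch for the decorator above; the route name and payload are invented for the example. Applied to a view that returns a `jsonify` response, a request carrying a valid `?callback=` parameter gets the JSON body wrapped in a JavaScript function call, while a plain request gets ordinary JSON.

```python
# Hypothetical route; only the @jsonp behaviour comes from the module above.
from flask import Flask, jsonify
from floatapp.jsonp import jsonp

app = Flask(__name__)

@app.route("/stats")
@jsonp
def stats():
    return jsonify(albums=3, photos=120)

# GET /stats                     -> {"albums": 3, "photos": 120}
# GET /stats?callback=handleIt   -> handleIt({"albums": 3, "photos": 120})
```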
@ -1,35 +1,53 @@
from flask import abort
from flask_login import UserMixin, LoginManager

login_manager = LoginManager()

from floatapp import app, login_manager
from flask import request, abort
from flask_login import current_user, UserMixin
from functools import wraps

class User(UserMixin):
	def __init__(self, id, admin=False):
		self.admin = admin
		self.id = id

	def __unicode__(self):
		return u"{}".format(self.id)

	def __str__(self):
		return str(self.id)

	@property
	def is_admin(self):
		return self.admin


photo_user = User("user")
admin_user = User("admin", True)


@login_manager.user_loader
def load_user(id):
	if id == "admin":
	if id == "user":
		return photo_user
	elif id == "admin":
		return admin_user
	return User(id)

	return None

@login_manager.unauthorized_handler
def unauthorized():
	return abort(403)

def login_required(fn):
	@wraps(fn)
	def decorated_view(*args, **kwargs):
		if query_is_admin_user(request.args) or query_is_photo_user(request.args) or current_user.is_authenticated():
			return fn(*args, **kwargs)
		return app.login_manager.unauthorized()
	return decorated_view

def admin_required(fn):
	@wraps(fn)
	def decorated_view(*args, **kwargs):
		if query_is_admin_user(request.args) or (current_user.is_authenticated() and current_user.admin):
			return fn(*args, **kwargs)
		return app.login_manager.unauthorized()
	return decorated_view

def query_is_photo_user(query):
	username = query.get("username", None)
	password = query.get("password", None)
	return username == app.config["PHOTO_USERNAME"] and password == app.config["PHOTO_PASSWORD"]

def query_is_admin_user(query):
	username = query.get("username", None)
	password = query.get("password", None)
	return username == app.config["ADMIN_USERNAME"] and password == app.config["ADMIN_PASSWORD"]

def is_authenticated():
	return query_is_admin_user(request.args) or query_is_photo_user(request.args) or current_user.is_authenticated()
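All of the `app.config` keys used here and in the views (`PHOTO_USERNAME`, `PHOTO_PASSWORD`, `ADMIN_USERNAME`, `ADMIN_PASSWORD`, plus `ALBUM_PATH`, `CACHE_PATH`, `ALBUM_ACCEL`, `CACHE_ACCEL`) come from the Flask configuration. As a sketch only, with placeholder values and without asserting the exact file name the app loads them from, a Python-syntax config could look like:

```python
# Placeholder values; adapt to your deployment. A SECRET_KEY is needed for
# flask-login's session cookies.
SECRET_KEY = "change-me"

PHOTO_USERNAME = "photos"
PHOTO_PASSWORD = "viewer-password"
ADMIN_USERNAME = "admin"
ADMIN_PASSWORD = "admin-password"

ALBUM_PATH = "/srv/photofloat/albums"   # original albums scanned by main.py
CACHE_PATH = "/srv/photofloat/cache"    # generated JSON and thumbnails
ALBUM_ACCEL = "/protected-albums"       # internal X-Accel-Redirect locations
CACHE_ACCEL = "/protected-cache"        # served directly by the front-end web server
```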
@ -1,13 +1,12 @@
from flask import Response
import subprocess
import os

import sys

class ProcessWrapper(object):
	def __init__(self, process, done):
		self.process = process
		self.done = done

	def close(self):
		self.done()
		if self.process.returncode is not None:
@ -15,13 +14,10 @@ class ProcessWrapper(object):
		self.process.stdout.close()
		self.process.terminate()
		self.process.wait()

	def __iter__(self):
		return self

	def __del__(self):
		self.close()

	def next(self):
		try:
			data = self.process.stdout.readline()
@ -32,8 +28,6 @@ class ProcessWrapper(object):
			return data
		self.close()
		raise StopIteration()
	__next__ = next


def send_process(args, pid_file):
	def setup_proc():
@ -42,22 +36,17 @@ def send_process(args, pid_file):
		f.close()
		os.close(0)
		os.dup2(1, 2)

	def tear_down_proc():
		try:
			os.unlink(pid_file)
		except:
			pass

	if os.path.exists(pid_file):
		f = open(pid_file, "r")
		pid = f.read()
		f.close()
		if os.path.exists("/proc/%s/status" % pid):
			return Response(
				"Scanner is already running.\n", mimetype="text/plain")

	process = subprocess.Popen(
		args, close_fds=True, stdout=subprocess.PIPE, preexec_fn=setup_proc)
			return Response("Scanner is already running.\n", mimetype="text/plain")
	process = subprocess.Popen(args, close_fds=True, stdout=subprocess.PIPE, preexec_fn=setup_proc)
	response = ProcessWrapper(process, tear_down_proc)
	return Response(response, direct_passthrough=True, mimetype="text/plain")
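A simplified, self-contained sketch of the streaming pattern used above, that is, a subprocess's stdout piped line by line through a Flask response, leaving out the pid-file bookkeeping and file-descriptor redirection that `send_process` adds. The route name and command are invented for the example.

```python
# Simplified illustration, not the module above.
import subprocess
from flask import Flask, Response

app = Flask(__name__)

@app.route("/demo-stream")
def demo_stream():
    proc = subprocess.Popen(["ping", "-c", "3", "localhost"],
                            stdout=subprocess.PIPE, close_fds=True)

    def generate():
        try:
            # Yield each line to the client as soon as the child prints it.
            for line in iter(proc.stdout.readline, b""):
                yield line
        finally:
            proc.stdout.close()
            proc.wait()

    return Response(generate(), direct_passthrough=True, mimetype="text/plain")
```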
@ -1,21 +1,19 @@
#!./venv/bin/python
#!/usr/bin/env python2

from TreeWalker import TreeWalker
from CachePath import message
import sys
import os


def main():
	reload(sys)
	sys.setdefaultencoding("UTF-8")

	if len(sys.argv) != 3:
		print("usage: %s ALBUM_PATH CACHE_PATH" % sys.argv[0])
		print "usage: %s ALBUM_PATH CACHE_PATH" % sys.argv[0]
		return

	try:
		os.umask(0o22)
		os.umask(022)
		TreeWalker(sys.argv[1], sys.argv[2])
	except KeyboardInterrupt:
		message("keyboard", "CTRL+C pressed, quitting.")
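The hunk above interleaves both branches' versions of the scanner entry point: the bare `print` statement and the `022` literal are Python-2-only spellings, `print(...)` and `0o22` also work on Python 3, and `reload(sys)`/`sys.setdefaultencoding()` exists only on Python 2. Assembled from those lines purely for illustration (the actual file on either branch may differ in detail), a Python-3-compatible variant reads:

```python
#!./venv/bin/python
# Illustrative assembly of the Python-3-compatible lines from the hunk above.
from TreeWalker import TreeWalker
from CachePath import message
import sys
import os


def main():
    # reload(sys); sys.setdefaultencoding("UTF-8") is unnecessary on Python 3,
    # where str is already Unicode.
    if len(sys.argv) != 3:
        print("usage: %s ALBUM_PATH CACHE_PATH" % sys.argv[0])
        return
    try:
        os.umask(0o22)
        TreeWalker(sys.argv[1], sys.argv[2])
    except KeyboardInterrupt:
        message("keyboard", "CTRL+C pressed, quitting.")


if __name__ == "__main__":
    main()
```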
@ -1,3 +0,0 @@
flask>=0.11
flask-login>=0.4.1
pillow>=5.3.0
19  web/.htaccess  Executable file
@ -0,0 +1,19 @@
AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript application/json

<FilesMatch "\.(jpg|otf|ico)$">
Header set Cache-Control "max-age=29030400, public"
</FilesMatch>
<FilesMatch "\.(css|js)$">
Header set Cache-Control "max-age=5184000, public"
</FilesMatch>
<FilesMatch "index.html">
Header set Cache-Control "max-age=2678400, public"
</FilesMatch>
<FilesMatch "\.json$">
Header set Cache-Control "max-age=3600, public"
</FilesMatch>

<FilesMatch "Makefile">
deny from all
</FilesMatch>
69  web/Makefile
@ -1,69 +0,0 @@
JS_DIR = js
CSS_DIR = css

JS_MIN = $(JS_DIR)/scripts.min.js
CSS_MIN = $(CSS_DIR)/styles.min.css

JS_MIN_FILES := $(sort $(patsubst %.js, %.min.js, $(filter-out %.min.js, $(wildcard $(JS_DIR)/*.js))))
CSS_MIN_FILES := $(sort $(patsubst %.css, %.min.css, $(filter-out %.min.css, $(wildcard $(CSS_DIR)/*.css))))

JS_COMPILER := java -jar bin/closure-compiler.jar --warning_level QUIET
CSS_COMPILER := java -jar bin/yui-compressor.jar --type css

DEBUG ?= 0

.PHONY: all deploy clean

all: $(JS_MIN) $(CSS_MIN)

ifeq ($(DEBUG),0)
%.min.js: %.js
	@echo " JS " $@
	@$(JS_COMPILER) --js $< --js_output_file $@
else
%.min.js: %.js
	@echo " JS " $@
	@cat $< > $@
endif

%.min.css: %.css
	@echo " CSS " $@
	@$(CSS_COMPILER) -o $@ $<

$(JS_MIN): $(JS_MIN_FILES)
	@echo " CAT " $@
	@cat $^ > $@

$(CSS_MIN): $(CSS_MIN_FILES)
	@echo " CAT " $@
	@cat $^ > $@

clean:
	@echo " RM " $(JS_MIN) $(JS_MIN_FILES) $(CSS_MIN) $(CSS_MIN_FILES)
	@rm -fv $(JS_MIN) $(JS_MIN_FILES) $(CSS_MIN) $(CSS_MIN_FILES)

include ../deployment-config.mk

SSH_OPTS := -q -o ControlMaster=auto -o ControlPath=.ssh-deployment.sock

deploy: all
	@echo " SSH $(WEB_SERVER)"
	@ssh $(SSH_OPTS) -Nf $(WEB_SERVER)

	@echo " RSYNC . $(WEB_SERVER):$(HTDOCS_PATH)"
	@ssh -t $(SSH_OPTS) $(WEB_SERVER) "sudo -u $(HTDOCS_USER) -v"
	@rsync -aizm --delete-excluded --exclude=.ssh-deployment.sock --exclude=Makefile --exclude=*.swp \
		--exclude=bin/ --include=scripts.min.js --include=styles.min.css \
		--exclude=*.js --exclude=*.css --rsh="ssh $(SSH_OPTS)" \
		--rsync-path="sudo -n -u $(HTDOCS_USER) rsync" \
		. "$(WEB_SERVER):$(HTDOCS_PATH)"

	@echo " CHOWN $(HTDOCS_USER):$(HTDOCS_USER) $(WEB_SERVER):$(HTDOCS_PATH)"
	@ssh -t $(SSH_OPTS) $(WEB_SERVER) "sudo chown -R $(HTDOCS_USER):$(HTDOCS_USER) '$(HTDOCS_PATH)'"

	@echo " CHMOD 750/640 $(WEB_SERVER):$(HTDOCS_PATH)"
	@ssh -t $(SSH_OPTS) $(WEB_SERVER) "sudo find '$(HTDOCS_PATH)' -type f -exec chmod 640 {} \;; \
		sudo find '$(HTDOCS_PATH)' -type d -exec chmod 750 {} \;;"

	@echo " SSH $(WEB_SERVER)"
	@ssh -O exit $(SSH_OPTS) $(WEB_SERVER)
Binary file not shown.
Binary file not shown.
1  web/css/.gitignore  vendored  Normal file
@ -0,0 +1 @@
*.min.css
3  web/css/.htaccess  Executable file
@ -0,0 +1,3 @@
<FilesMatch "(?<!min)\.css">
deny from all
</FilesMatch>
@ -40,7 +40,7 @@ a:hover {
	padding: 0;
}
.current-thumb {
	border-top: 4px solid #FFAD27 !important;
	border-top: 1px solid #FFAD27 !important;
}
#subalbums {
	padding-top: 1.5em;
@ -63,7 +63,7 @@ a:hover {
#next, #back {
	position: absolute;
	width: auto;
	font-size: 12em;
	font-size: 4.5em;
	line-height: 0;
	top: 40%;
	font-weight: bold;
@ -86,6 +86,7 @@ a:hover {
	bottom: 150px;
	top: 2.5em;
	overflow: hidden;
	margin-bottom: 0.5em;
	left: 0;
	right: 0;
	text-align: center;
@ -101,9 +102,9 @@ a:hover {
#photo-links {
	background-color: #000000;
	font-weight: bold;
	height: 12px;
	font-size: 12px;
	line-height: 10px;
	height: 10px;
	font-size: 10px;
	line-height: 7px;
	padding-top: 3px;
	padding-bottom: 3px;
	padding-right: 10px;
@ -166,7 +167,6 @@ a:hover {
	white-space: nowrap;
	padding: 0 !important;
	text-align: center;
	background-color: #999999;
}

#powered-by {
12  web/css/css-minify.sh  Executable file
@ -0,0 +1,12 @@
#!/bin/bash

# minify all .css-files
ls -1 *.css|grep -Ev "min.css$" | while read cssfile; do
	newfile="${cssfile%.*}.min.css"
	echo "$cssfile --> $newfile"
	curl -X POST -s --data-urlencode "input@$cssfile" http://cssminifier.com/raw > $newfile
done

# merge all into one single file
rm -f styles.min.css
cat *.min.css > styles.min.css
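For completeness, a rough Python equivalent of the loop above, assuming (as the curl call implies) that the cssminifier.com raw endpoint takes the stylesheet in an `input` form field; like the shell version it writes one `.min.css` per source file and then concatenates the results.

```python
# Rough equivalent of css-minify.sh; endpoint behaviour inferred from the curl call above.
import glob
import requests

minified_parts = []
for cssfile in sorted(glob.glob("*.css")):
    if cssfile.endswith(".min.css"):
        continue
    newfile = cssfile[:-len(".css")] + ".min.css"
    print("%s --> %s" % (cssfile, newfile))
    with open(cssfile) as src:
        minified = requests.post("http://cssminifier.com/raw",
                                 data={"input": src.read()}).text
    with open(newfile, "w") as dst:
        dst.write(minified)
    minified_parts.append(minified)

with open("styles.min.css", "w") as out:
    out.write("\n".join(minified_parts))
```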
Binary file not shown.
Before: image, 250 KiB (not shown)
@ -6,18 +6,16 @@
		<meta name="medium" content="image" />
		<title>Photos</title>
		<link href="css/styles.min.css" rel="stylesheet" type="text/css" />
		<script src="js/scripts.min.js"></script>
		<script type="text/javascript" src="js/scripts.min.js"></script>
	</head>
	<body>
		<div id="title">Photos</div>
		<div id="photo-view">
		<div id="title">Photos</div>
		<div id="photo-view">
			<div id="photo-box">
				<a id="next-photo">
					<img id="photo" alt="Dynamic image, no alt available" src="./img/image-placeholder.png" />
				</a>
				<a id="next-photo"><img id="photo" /></a>
				<div id="photo-bar">
					<div id="photo-links">
						<a id="metadata-link" href="javascript:void(0)">show metadata</a> | <a id="original-link">download original</a><span id="fullscreen-divider"> | </span><a id="fullscreen" href="javascript:void(0)">fullscreen</a>
						<a id="metadata-link" href="javascript:void(0)">show metadata</a> | <a id="original-link" target="_blank">download original</a><span id="fullscreen-divider"> | </span><a id="fullscreen" href="javascript:void(0)">fullscreen</a>
					</div>
					<div id="metadata"></div>
				</div>
@ -26,25 +24,21 @@
				<div id="video-box-inner">
				</div>
			</div>

			<a id="back">‹</a>
			<a id="next">›</a>
		</div>

		<div id="album-view">
		</div>
		<div id="album-view">
			<div id="thumbs">
				<div id="loading">Loading...</div>
			</div>
			<div id="subalbums"></div>
			<div id="powered-by">Powered by <a href="https://derdritte.net/gitea/markus/photofloat">subPhotoFloat</a></div>
		</div>
			<div id="powered-by">Powered by <a href="http://www.zx2c4.com/projects/photofloat/" target="_blank">PhotoFloat</a></div>
		</div>

		<div id="error-overlay"></div>
		<div id="error-text">Forgot my camera.</div>
		<div id="auth-text"><form id="auth-form"><input id="password" type="password" /><input type="submit" value="Login" /></form</div>

		<div id="error-overlay"></div>
		<div id="error-text">Forgot my camera.</div>
		<div id="auth-text">
			<form id="auth-form">
				<input id="password" type="password" />
				<input type="submit" value="Login" />
			</form>
		</div>
	</body>
</html>
1  web/js/.gitignore  vendored  Normal file
@ -0,0 +1 @@
*.min.js
3  web/js/.htaccess  Executable file
@ -0,0 +1,3 @@
<FilesMatch "(?<!min)\.js">
deny from all
</FilesMatch>
@ -92,7 +92,7 @@
		$.ajax({
			type: "GET",
			dataType: "text",
			url: "auth?username=" + password,
			url: "auth?username=photos&password=" + password,
			success: function() {
				result(true);
			},
@ -164,9 +164,9 @@ $(document).ready(function() {
		$(window).bind("resize", scaleImage);
		container = $("#photo-view");
		if (image.css("width") !== "100%" && container.height() * image.attr("ratio") > container.width())
			image.css("width", "100%").css("height", "auto");
			image.css("width", "100%").css("height", "auto").css("position", "absolute").css("bottom", 0);
		else if (image.css("height") !== "100%")
			image.css("height", "100%").css("width", "auto");
			image.css("height", "100%").css("width", "auto").css("position", "").css("bottom", "");
	}
	function scaleVideo() {
		var video, container;
@ -244,7 +244,7 @@
		$("#next-photo").attr("href", nextLink);
		$("#next").attr("href", nextLink);
		$("#back").attr("href", "#!/" + photoFloat.photoHash(currentAlbum, previousPhoto));
		$("#original-link").attr("href", photoFloat.originalPhotoPath(currentAlbum, currentPhoto));
		$("#original-link").attr("target", "_blank").attr("href", photoFloat.originalPhotoPath(currentAlbum, currentPhoto));

		text = "<table>";
		if (typeof currentPhoto.make !== "undefined") text += "<tr><td>Camera Maker</td><td>" + currentPhoto.make + "</td></tr>";
@ -318,7 +318,6 @@
		photoFloat.parseHash(location.hash, hashParsed, die);
	});
	$(window).hashchange();
	/* Keyboard: Left / Right */
	$(document).keydown(function(e){
		if (currentPhoto === null)
			return true;
@ -331,7 +330,6 @@
		}
		return true;
	});
	/* Mousewheel */
	$(document).mousewheel(function(event, delta) {
		if (currentPhoto === null)
			return true;
@ -344,16 +342,6 @@
		}
		return true;
	});

	/* Swipe */
	xwiper = new Xwiper('#photo-view');
	xwiper.onSwipeLeft(function(event){
		window.location.href = $("#next").attr("href");
	});
	xwiper.onSwipeRight(function(event){
		window.location.href = $("#back").attr("href");
	});

	$("#photo-box").mouseenter(function() {
		$("#photo-links").stop().fadeTo("slow", 0.50).css("display", "inline");
	});
@ -1,135 +0,0 @@
/*
Xwiper

Provided by https://github.com/uxitten/xwiper/
*/

'use strict';

var _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();

function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }

var Xwiper = function () {
	function Xwiper(element) {
		_classCallCheck(this, Xwiper);

		this.element = null;
		this.touchStartX = 0;
		this.touchStartY = 0;
		this.touchEndX = 0;
		this.touchEndY = 0;
		this.sensitive = 50;
		this.onSwipeLeftAgent = null;
		this.onSwipeRightAgent = null;
		this.onSwipeUpAgent = null;
		this.onSwipeDownAgent = null;
		this.onTapAgent = null;

		this.onTouchStart = this.onTouchStart.bind(this);
		this.onTouchEnd = this.onTouchEnd.bind(this);
		this.onSwipeLeft = this.onSwipeLeft.bind(this);
		this.onSwipeRight = this.onSwipeRight.bind(this);
		this.onSwipeUp = this.onSwipeUp.bind(this);
		this.onSwipeDown = this.onSwipeDown.bind(this);
		this.onTap = this.onTap.bind(this);
		this.destroy = this.destroy.bind(this);
		this.handleGesture = this.handleGesture.bind(this);

		this.element = document.querySelector(element);
		this.element.addEventListener('touchstart', this.onTouchStart, false);

		this.element.addEventListener('touchend', this.onTouchEnd, false);
	}

	_createClass(Xwiper, [{
		key: 'onTouchStart',
		value: function onTouchStart(event) {
			this.touchStartX = event.changedTouches[0].screenX;
			this.touchStartY = event.changedTouches[0].screenY;
		}
	}, {
		key: 'onTouchEnd',
		value: function onTouchEnd(event) {
			this.touchEndX = event.changedTouches[0].screenX;
			this.touchEndY = event.changedTouches[0].screenY;
			this.handleGesture();
		}
	}, {
		key: 'onSwipeLeft',
		value: function onSwipeLeft(func) {
			this.onSwipeLeftAgent = func;
		}
	}, {
		key: 'onSwipeRight',
		value: function onSwipeRight(func) {
			this.onSwipeRightAgent = func;
		}
	}, {
		key: 'onSwipeUp',
		value: function onSwipeUp(func) {
			this.onSwipeUpAgent = func;
		}
	}, {
		key: 'onSwipeDown',
		value: function onSwipeDown(func) {
			this.onSwipeDownAgent = func;
		}
	}, {
		key: 'onTap',
		value: function onTap(func) {
			this.onTapAgent = func;
		}
	}, {
		key: 'destroy',
		value: function destroy() {
			this.element.removeEventListener('touchstart', this.onTouchStart);
			this.element.removeEventListener('touchend', this.onTouchEnd);
		}
	}, {
		key: 'handleGesture',
		value: function handleGesture() {
			/**
			 * swiped left
			 */
			if (this.touchEndX + this.sensitive <= this.touchStartX) {
				this.onSwipeLeftAgent && this.onSwipeLeftAgent();
				return 'swiped left';
			}

			/**
			 * swiped right
			 */
			if (this.touchEndX - this.sensitive >= this.touchStartX) {
				this.onSwipeRightAgent && this.onSwipeRightAgent();
				return 'swiped right';
			}

			/**
			 * swiped up
			 */
			if (this.touchEndY + this.sensitive <= this.touchStartY) {
				this.onSwipeUpAgent && this.onSwipeUpAgent();
				return 'swiped up';
			}

			/**
			 * swiped down
			 */
			if (this.touchEndY - this.sensitive >= this.touchStartY) {
				this.onSwipeDownAgent && this.onSwipeDownAgent();
				return 'swiped down';
			}

			/**
			 * tap
			 */
			if (this.touchEndY === this.touchStartY) {
				this.onTapAgent && this.onTapAgent();
				return 'tap';
			}
		}
	}]);

	return Xwiper;
}();
12  web/js/js-minify.sh  Executable file
@ -0,0 +1,12 @@
#!/bin/bash

# minify all .js-files
ls -1 *.js|grep -Ev "min.js$" | while read jsfile; do
	newfile="${jsfile%.*}.min.js"
	echo "$jsfile --> $newfile"
	curl -X POST -s --data-urlencode "input@$jsfile" http://javascript-minifier.com/raw > $newfile
done

# merge all into one single file
rm -f scripts.min.js
cat *.min.js > scripts.min.js