Thursday, December 31, 2015

From dot product to matrix multiplication

From dot product to matrix multiplication is only a small step now:

CREATE OR REPLACE FUNCTION public._matmul(
    m1 anyarray,
    m2 anyarray,
    r anyarray)
  RETURNS anyarray AS
$BODY$
DECLARE
ubm1 integer;
ubm2 integer;
BEGIN
ubm1 := array_upper(m1,1);
ubm2 := array_upper(m2,1);

FOR i IN 1..ubm1 LOOP
FOR j IN 1..array_upper(m2,2) LOOP -- iterate over the columns of m2, not the rows of m1
r[i][j] := m1[i:i][1:ubm2] +* m2[1:ubm2][j:j];
END LOOP;
END LOOP;

return r;
END;
$BODY$
  LANGUAGE plpgsql IMMUTABLE STRICT;

Since I didn't find a way to either create an array of type anyarray or to determine the type of the input array elements, a correctly sized result array has to be created outside and passed into the function as an anyarray argument:

select _matmul('{{1.0,2.0,3.0},{4.0,5.0,6.0}}'::double precision[], '{{7.0,8.0},{9.0,10.0},{11.0,12.0}}'::double precision[], '{{null,null},{null,null}}');

{{58,64},{139,154}}

11 ms
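
Instead of spelling out the NULL literals, the result array can also be built with array_fill() - a small convenience, untested here, assuming the result dimensions (rows of m1 x columns of m2) are known:

select _matmul('{{1.0,2.0,3.0},{4.0,5.0,6.0}}'::double precision[], '{{7.0,8.0},{9.0,10.0},{11.0,12.0}}'::double precision[], array_fill(NULL::double precision, ARRAY[2,2]));

In principle, array_fill() with an element of m1 (e.g. m1[1][1]) as the template value might even work inside the function to get a result array of the right element type - also untested.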

This is only a proof of concept. It likely has problems, but it works well enough to run the example. I especially don't like the two nested loops and the result element access by index - maybe there is a better solution...

Tuesday, December 29, 2015

The arraymath extension vs. plpgsql

After arraymath was fixed, I did some simple benchmarks against my plpgsql code from the previous posts.

All tests were done with PostgreSQL 9.5rc1 on a Fujitsu Celsius H730 (Core i7 4700MQ, 16 GB RAM, 500 GB SSD), 1 million random elements per array.

Here are the results.

Array * Array, plpgsql:

select array(select random() from generate_series(1,1000000)) * array(select random() from generate_series(1,1000000));

1208 ms

Array * Array, arraymath:

select array(select random() from generate_series(1,1000000)) @* array(select random() from generate_series(1,1000000));

935 ms

Array * Scalar, plpgsql:

select array(select random() from generate_series(1,1000000)) * 2.0::double precision;

784 ms

Array * Scalar, arraymath:

select array(select random() from generate_series(1,1000000)) @* 2.0::double precision;

743 ms

So, arraymath is 33% faster for Array * Array and 6% faster for Array * Scalar. C wins over plpgsql, but not as dominantly as I initially thought. The hard work is indeed the array traversal - and arraymath has to do it too.

And it's probably not a good idea to access arrays by index in plpgsql if you intend to scan the whole array anyway.
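
To illustrate the difference, here is a hypothetical pair of approaches for summing an array - _sum_by_index is not from the posts above, just a sketch for comparison:

CREATE OR REPLACE FUNCTION public._sum_by_index(v double precision[])
  RETURNS double precision AS
$BODY$
DECLARE
s double precision := 0;
BEGIN
-- element-by-element access: one expression evaluation per v[i]
FOR i IN array_lower(v,1)..array_upper(v,1) LOOP
s := s + v[i];
END LOOP;
return s;
END;
$BODY$
  LANGUAGE plpgsql IMMUTABLE STRICT;

-- set-based alternative, one pass over the array:
-- SELECT sum(x) FROM unnest(v) AS x;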

More FORTRAN 90 like vector operations in PostgreSQL

It is also possible to do element-wise operations on a vector and a scalar:

CREATE OR REPLACE FUNCTION public._vectormuls(
    v1 anyarray,
    v2 anyelement)
  RETURNS anyarray AS
$BODY$
DECLARE
BEGIN
if array_ndims(v1) != 1 Then
RAISE EXCEPTION 'Only vectors supported';
end if;

return array(select v.a * v.b from (select unnest(v1) as a, v2 as b) v);
END;
$BODY$
  LANGUAGE plpgsql IMMUTABLE STRICT;

CREATE OPERATOR public.*(
  PROCEDURE = _vectormuls,
  LEFTARG = anyarray,
  RIGHTARG = anyelement);
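
A quick sanity check, assuming the function and operator above are installed:

SELECT ARRAY[1.5,2.5,3.5]::double precision[] * 2.0::double precision;

which should yield {3,5,7}.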

The runtime of the vector/scalar function is about 60% of the vector/vector variant, so the effort for unnest() clearly dominates the runtime of such functions.

And since the dot product of two vectors is the sum of the products of their corresponding elements, it is as simple as this:

CREATE OR REPLACE FUNCTION public._vectordotprod(
    v1 anyarray,
    v2 anyarray)
  RETURNS anyelement AS
$BODY$
DECLARE
BEGIN
if array_ndims(v1) != 1 OR array_ndims(v2) != 1 Then
RAISE EXCEPTION 'Only vectors supported';
end if;

if array_length(v1,1) != array_length(v2,1)  Then
RAISE EXCEPTION 'Vector size mismatch';
end if;

-- return SUM(v.a * v.b) from (select unnest(v1) as a, unnest(v2) as b) v; --9.4
return SUM(a*b) from unnest(v1,v2) v (a,b); -- 9.5
END;
$BODY$
  LANGUAGE plpgsql IMMUTABLE STRICT;

CREATE OPERATOR public.+*(
  PROCEDURE = _vectordotprod,
  LEFTARG = anyarray,
  RIGHTARG = anyarray,
  COMMUTATOR = +*);

The operator name should be . or *+, but since PostgreSQL won't allow that, +* was the next best thing that worked...
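
A small usage sketch, assuming the operator above is installed:

SELECT ARRAY[1.0,2.0,3.0]::double precision[] +* ARRAY[4.0,5.0,6.0]::double precision[];

which should return 32 (1*4 + 2*5 + 3*6).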

BTW: I tried the same with sql instead of plpgsql as the function language and the timings are the same. So the additional sanity checks and the language overhead effectively cost nothing compared to the unrolling of the vector(s).

Thursday, December 24, 2015

FORTRAN 90 like vector operations in PostgreSQL revisited

I tried to compare the arraymath extension suggested in a comment with my plpgsql solution from the last post.

PostgreSQL was 9.5rc1 on Ubuntu 14.04 LTS.

make
make install
create extension arraymath;

No problems so far, but...

select array(select random() from generate_series(1,1000000)) @+ array(select random() from generate_series(1,1000000));

NOTICE:  Hello from ArrayMath 1.0
********** Error **********

Server crash!

Whereas the plpgsql code does the addition of two one million element vectors in 1.2 seconds.

Interestingly this works:

SELECT ARRAY[1,2,3,4,5,6,7,8] @+ ARRAY[1,2,3,4,5,6,7,8];

While this crashes:

SELECT ARRAY[1,2,3,4,5,6,7,8,9] @+ ARRAY[1,2,3,4,5,6,7,8,9];

So arraymath seems to fail with vectors larger than 8 elements. Probably only on 9.5, but I couldn't get it to work properly.


Sunday, December 20, 2015

FORTRAN 90 like vector operations in PostgreSQL

Let's say you have two vectors A and B and want to do element-wise operations, creating a new vector C. FORTRAN 90 (and many other languages now) has a concise notation for that. E.g.

C = A + B

does the element-wise addition of A and B into C.

Can we have that in PostgreSQL too?

CREATE OR REPLACE FUNCTION _vectoradd(
    v1 anyarray,
    v2 anyarray)
  RETURNS anyarray AS
$BODY$
DECLARE
BEGIN
if array_ndims(v1) != 1 OR array_ndims(v2) != 1 Then
RAISE EXCEPTION 'Only vectors supported';
end if;

if array_length(v1,1) != array_length(v2,1)  Then
RAISE EXCEPTION 'Vector size mismatch';
end if;

-- return array(select v.a + v.b from (select unnest(v1) as a, unnest(v2) as b) v); --9.4
return array(select a+b from unnest(v1,v2) v (a,b)); -- 9.5
END;
$BODY$
  LANGUAGE plpgsql immutable strict;

CREATE OPERATOR + (
    LEFTARG = anyarray,
    RIGHTARG = anyarray,
    PROCEDURE = _vectoradd,
    COMMUTATOR = +);

select ARRAY[1,2,3,4] + ARRAY[1,2,3,4];

{2,4,6,8}

Yes, we can! :-)

Friday, December 11, 2015

The MongoDB BI analytics connector ... revisited

Woah, comments! :-)

Instead of individually answering them, I'll try to write a follow up.

In the meantime I've read the MongoDB BI connector setup guide. First, it doesn't mention that Python is required, so the real thing could be written in C. It makes some sense to develop the concept with Python / Multicorn first and, if it works, go to C for performance. RDKit is developed that way.

Second, I assumed that it would pull data from MongoDB into PostgreSQL for performance reasons. Apparently it doesn't. From the look of it, PostgreSQL is just the runtime for the FDW, routing queries directly to MongoDB.

This attenuates my initial concern a bit: that, from a managerial view, a setup where you have to administer and develop against both MongoDB and PostgreSQL will soon raise the question whether one database wouldn't do.

And with Tableau, explicitly mentioned in the installation document, the winner wouldn't be MongoDB.

I've worked with Tableau and it is totally geared towards the relational model. To make things worse for anything not SQL, it relies on ODBC, which further limits the queries and data types it understands.
If accessing PostgreSQL's advanced features from Tableau is already difficult, going directly against MongoDB would be hard. So this happened instead...

Still, if there is no business reason why it must be MongoDB and you can't go with PostgreSQL jsonb for example, justifying the decision to continue with MongoDB once the users have licked NewSQL (aaah, those buzzwords) blood could be difficult - especially with the BI folks, who just love having a rich analysis toolbox right in the database. So it's a slippery route for MongoDB as a company to introduce their users to a viable competitor product.

And for PostgreSQL not being a distributed database: it isn't out of the box, but it can be. Whether and what will work for you, as always, depends on the use case:

BDR
pg_shard
PostgreSQL FDW
PL/Proxy
pgpool-II
pmpp

Pick your poison...

(PostgreSQL can even do map/reduce, if you want it to. The mind's the limit.)

Wednesday, December 9, 2015

The MongoDB BI analytics connector ... PostgreSQL FDW

Now this gets interesting. Apparently MongoDB will get a BI connector - which seems to be a Multicorn foreign data wrapper for PostgreSQL!

While the step is logical in some way, given MongoDB's limited built-in analytical capabilities vs. what PostgreSQL can do by declaration, e.g. CTEs, window functions or TABLESAMPLE, this could also backfire badly. Well, I'm almost convinced it will backfire.

PostgreSQL already has 'NoSQL' capabilities like native JSON and HSTORE k/v, and there is ToroDB, emulating a wire-protocol-compatible MongoDB on top of PostgreSQL. There is already work on views (slide 34) in ToroDB, which will enable users to query documents stored in ToroDB not only with the MongoDB query language but also with SQL, thus seamlessly integrating ToroDB document data with plain PostgreSQL relational data.

Then, there is no reason to use MongoDB at all, except maybe data ingestion speed. Data ingestion in ToroDB is way slower than with a 'real' MongoDB, but this is being worked on.

And from my experience in a current project, with a bit of anticipatory thinking, PostgreSQL data ingestion speed can at least challenge MongoDB - with security, integrity, transactions and all - on a server with a quarter of the CPU cores of the MongoDB server.

So, the wolf and the lamb will feed together, and the lion will eat straw like the ox... - there are truly interesting times ahead. :-)

Sunday, November 1, 2015

Back from PostgreSQL Conference Europe 2015

A very good conference - again.

All the speakers knew their stuff very well, all talks I attended were 100% free of bullshit.

There is a lot going on to add some very advanced features to PostgreSQL, like GPGPU-accelerated queries (a project backed by NEC). FDWs are getting better and better with each release, and heaven knows what ingenious applications people will find for logical decoding.

Also nice to see was that PostgreSQL runs so well on POWER Servers (and Raspberry Pi clusters :-)).

This rapid pace will put some strain on the update schedules of PostgreSQL users, I guess, and it may take a while before the optimal solutions to some problems - especially parallel query execution and the new index types - crystallize. But while PostgreSQL already is the world's most advanced open source database, it could soon be the world's most advanced database.

In terms of data integration capabilities and extensibility - it might be already...

Thursday, September 17, 2015

pg_strom - The road ahead!

OK, since my last post, strange things have happened.

I reinstalled CUDA from a binary package instead of directly over the Internet from the Nvidia repository. According to the Version numbers, they should be identical.

And?

SET pg_strom.enabled = ON;

EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;

HashAggregate  (cost=177230.88..177230.89 rows=1 width=101) (actual time=2694.635..2694.635 rows=1 loops=1)
  Group Key: y
  ->  Custom Scan (GpuPreAgg)  (cost=13929.24..173681.39 rows=260 width=408) (actual time=235.256..2694.053 rows=76 loops=1)
        Bulkload: On (density: 100.00%)
        Reduction: Local + Global
        Device Filter: (sqrt((x)::double precision) > '0'::double precision)
        ->  Custom Scan (BulkScan) on t_test  (cost=9929.24..168897.54 rows=5000000 width=101) (actual time=3.946..599.690 rows=5000000 loops=1)
Planning time: 0.110 ms
Execution time: 2749.609 ms

pg_strom now works as expected! So it was a driver issue.

It's still 7% slower on my machine than the CPU-only plan, but that is explained by the weak GPU.

Monday, September 14, 2015

pg_strom - The rough road ahead

Inspired by those articles on pg_strom, the CUDA based GPGPU accelerator for PostgreSQL, I decided to repeat the tests.

First a caveat. I managed to destroy my Ubuntu 14.04 LTS beyond recoverability with the native Nvidia drivers and had to reinstall from scratch.

At least if you run a dual-GPU setup like my mobile workstation (Intel HD4600 and Nvidia Quadro K1100M), install the Nvidia CUDA package from the Nvidia repository on a fresh Ubuntu installation - and if it works, don't touch it again.

If it works, it works well - you can even switch between Intel and Nvidia graphics without rebooting.

OK, here is the output of deviceQuery:

Device 0: "Quadro K1100M"
  CUDA Driver Version / Runtime Version          7.5 / 7.5
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 2048 MBytes (2147352576 bytes)
  ( 2) Multiprocessors, (192) CUDA Cores/MP:     384 CUDA Cores
  GPU Max Clock rate:                            706 MHz (0.71 GHz)
  Memory Clock rate:                             1400 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.5, NumDevs = 1, 

Device0 = Quadro K1100M

Result = PASS

PostgreSQL says:

LOG:  CUDA Runtime version: 7.5.0
LOG:  NVIDIA driver version: 352.39
LOG:  GPU0 Quadro K1100M (384 CUDA cores, 705MHz), L2 256KB, RAM 2047MB (128bits, 1400MHz), capability 3.0
LOG:  NVRTC - CUDA Runtime Compilation vertion 7.5

Now the tests:

CREATE TABLE t_test AS
SELECT x, 'a'::char(100) AS y, 'b'::char(100) AS z
FROM generate_series(1, 5000000) AS x
ORDER BY random();

SET pg_strom.enabled = OFF;

EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;

HashAggregate  (cost=242892.34..242892.35 rows=1 width=101) (actual time=2550.064..2550.064 rows=1 loops=1)
  Group Key: y
  ->  Seq Scan on t_test  (cost=0.00..234559.00 rows=1666667 width=101) (actual time=0.016..779.110 rows=5000000 loops=1)
        Filter: (sqrt((x)::double precision) > '0'::double precision)
Planning time: 0.104 ms
Execution time: 2550.131 ms

SET pg_strom.enabled = ON;

EXPLAIN ANALYZE SELECT count(*)
FROM t_test
WHERE sqrt(x) > 0
GROUP BY y;

HashAggregate  (cost=177230.88..177230.89 rows=1 width=101) (actual time=25393.766..25393.767 rows=1 loops=1)
  Group Key: y
  ->  Custom Scan (GpuPreAgg)  (cost=13929.24..173681.39 rows=260 width=408) (actual time=348.584..25393.123 rows=76 loops=1)
        Bulkload: On (density: 100.00%)
        Reduction: Local + Global
        Device Filter: (sqrt((x)::double precision) > '0'::double precision)
        ->  Custom Scan (BulkScan) on t_test  (cost=9929.24..168897.54 rows=5000000 width=101) (actual time=4.336..628.920 rows=5000000 loops=1)
Planning time: 0.330 ms
Execution time: 25488.189 ms

Whoa, pg_strom is 10x slower for me. Why? I don't know.

It could be a driver issue, because I see heavy CPU spikes during the query - up to 100% on some cores. The whole system becomes noticeably unresponsive. My driver version is 352.39 instead of 352.30.

It could also be that the original test used a comparatively weak CPU (unspecified 4 GHz AMD, I therefore assume an FX8350) with a powerful GPU (Nvidia GTX970, 4GB), while my test used a comparatively powerful CPU (Intel Core i7-4700MQ) and a weak GPU (Nvidia Quadro K1100M, 2GB).

But does that explain the CPU spikes? Well, we probably see suboptimal host<->device memory transfers here: the GTX970 not only has double the memory, it also has double the bus width.

I second that we might be seeing the future here, but it's not ready for prime time yet...

UPDATE: Please see the next post also.

Thursday, September 10, 2015

Finally - Cilk(TM) Plus in gcc 5.1

gcc 5.1 now has Cilk(TM) Plus support.

Thread level parallelism and vectorization in 27 lines of C++:

#include <iostream>
#include <cilk/cilk.h>

using namespace std;

int main()
{
    int a[8];
    int b[8];

    b[0:4] = 1;  // array section notation: set b[0]..b[3] to 1
    b[4:4] = -1; // and b[4]..b[7] to -1

    cilk_for (int i = 0; i < 8; ++i) // iterations may run in parallel threads
    {
        a[i] = i*i;
    }

    a[:] = a[:] * b[:]; // element-wise multiplication over the whole arrays

    for (int i = 0; i < 8; ++i)
    {
        cout << a[i] << endl;
    }

    return 0;
}
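
For reference: building this with GCC requires the -fcilkplus option, and programs using cilk_for also need to be linked against the Cilk runtime library, libcilkrts.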

I really, really love this...

Tuesday, August 25, 2015

The 'other' cloud - parasitic storage as a service

Since some NoSQL products by default ship with no security at all, this was no real surprise after MongoDB, but the magnitude is astounding.

How about using this for something useful?
  1. Scan the Internet for known products/servers that allow unconditional read/write access
  2. Write storage adapters
  3. Invent a mechanism to store data encrypted and with some redundancy, in case someone gets a wake up call
  4. Invent a mechanism to rebalance storage if servers become unavailable or new ones are added to the list of storage nodes
  5. Build a service around 1, 2, 3, and 4
There it is, the 'other' cloud, at almost no cost except bandwidth...

But of course, this is purely fictional. The bad guys don't have good ideas and the good guys won't do it. There is no illegal data stored in your session cache.

Just keep ignoring the fine manuals and carry on. Nobody needs database administrators, everybody knows that...

Tuesday, August 18, 2015

So, what is TABLESAMPLE actually good for?

Q: So, what is TABLESAMPLE actually good for?

A: To speed things up that can be answered from a small sample instead of the whole population!

Like so:

CREATE TABLE public.ts_test
(
  id serial NOT NULL,
  val double precision,
  CONSTRAINT ts_test_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);

Insert 100 million random values in the range 0.0 <= x < 1.0:

INSERT INTO public.ts_test (val)
SELECT
    random()
FROM
    generate_series(1, 100000000);

And now, some queries to get the standard deviation.

For the whole population:

SELECT round(stddev_pop(val)::numeric,5) FROM ts_test;

0.28869 in 15005 ms
Error: 0%
Speedup: 1x

For a sample of size 20000 (good for 1% error at 99% confidence - 20000 rows out of 100 million is a sampling percentage of 0.02, hence the parameter below):

SELECT round(stddev_samp(val)::numeric,5) FROM ts_test TABLESAMPLE BERNOULLI(0.02);

0.28867 in 2387 ms
Error: < 1%
Speedup: 6.3x

SELECT round(stddev_samp(val)::numeric,5) FROM ts_test TABLESAMPLE SYSTEM(0.02);

0.28791 in 25 ms
Error: < 1%
Speedup: 600x

With a table that large, both methods BERNOULLI and SYSTEM are accurate enough to give results well below the 1% error target.

(For potential problems with SYSTEM, see this post)

SYSTEM aces BERNOULLI with a speedup of 600x vs. 6.3x relative to a full population query.

And this is hard to beat...

Addendum: TABLESAMPLE only works on tables and materialized views!

Monday, August 17, 2015

ToroDB up and running in five simple steps



A working ToroDB installation in five steps.

You'll need:

ToroDB from 8Kdata:
https://github.com/torodb/torodb

A working PostgreSQL instance, 9.4 or better:
http://www.postgresql.org/download/

A suitable JRE, version 6 or higher. It has been mainly tested with Oracle JRE 8.

The mongo shell and mongoimport from MongoDB:
https://www.mongodb.org/

Some test data from citusdata:
http://examples.citusdata.com/customer_reviews_nested_1998.json.gz

Music:
"Hitman" Kevin MacLeod from incompetech.com
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0/

Tuesday, August 4, 2015

...gently down the stream

So, for the test I changed the location to San Francisco, and after about one hour the table tstest.tstream contained 3705 tweets.

select tag, count(*) as tag_count from tstest.tagstream
group by tag order by tag_count desc limit 10;

produces:

"tag";"tag_count" 
"MTVHottest";376
"Hiring";210
"CareerArc";172
"Job";158
"job";157
"Jobs";145
"SanFrancisco";79
"hiring";69
"Sales";40
"SanJose";39

So there is something going on on MTV in the Bay Area. :-)

In a production system, I probably would write a background worker for stream injection instead of an external job.

And following discussions on Database Soup, it could indeed be interesting to configure the stream server for maximum throughput, sacrificing durability for speed, and then pull any data that should be preserved over a postgres_fdw link into another server with a more conservative setup.

BTW: If you plan to keep way more data in the ringbuffer to dig further into history occasionally, this would be a perfect case for using a long-tailed table.

Row, row, row your boat...


In the announcement of PipelineDB, Josh Berkus gave a hint that - and how - a similar functionality might be achieved with stock PostgreSQL. Let's see if we can analyze a continuous Twitter stream too...

First, we need a fast table to act as the ringbuffer:

CREATE UNLOGGED TABLE tstest.tstream
(
  rec timestamp with time zone NOT NULL DEFAULT clock_timestamp(),
  tevent jsonb NOT NULL,
  CONSTRAINT pk_tstream PRIMARY KEY (rec)
)
WITH (
  OIDS=FALSE
);


To inject new tweets into the buffer and trim the outdated ones, I used this little injector, written in Java using the hosebird client and the example code provided there.

Once it runs in its infinite loop, the table is populated with new tweets, while tweets older than one hour are evicted every five minutes, courtesy of a fixed-rate timer task. So, the maximum size of the buffer table is one hour and five minutes worth of tweets in the worst case.
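
The eviction itself boils down to a statement along these lines (a sketch - the actual SQL is issued from within the Java timer task):

DELETE FROM tstest.tstream WHERE rec < clock_timestamp() - interval '1 hour';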

The analysis view is taken verbatim from the PipelineDB example:

  CREATE OR REPLACE VIEW tstest.tagstream as
    SELECT jsonb_array_elements(tevent #>
      ARRAY['entities','hashtags']) ->> 'text' AS tag
    FROM tstest.tstream
    WHERE rec >
          ( clock_timestamp() - interval '1 hour' );


Plus the missing ')' before ->> 'text'. :-)


As is the most popular tags query:

select tag, count(*) as tag_count from tstest.tagstream
group by tag order by tag_count desc limit 10;


Does it work? Yes.

Since I don't have access to my development machine right now, stay tuned for Part 2 with some results.

Thursday, July 23, 2015

Deriving the elemental composition from a molformula in pgchem::tigress

pgchem::tigress can generate molecular formulae like C3H6NO2- from chemical structures.

But what if we need access to the elemental composition as a relation, e.g.:

element  count
C        3
N        1
O        2

Fortunately, PostgreSQL is awesome:

CREATE OR REPLACE FUNCTION elemental_composition(molformula TEXT)
  RETURNS TABLE(element TEXT, count INTEGER) AS
$BODY$
DECLARE token TEXT[];
DECLARE elements TEXT[];
BEGIN
elements := ARRAY['C','N','O','P','S','Cl']; --expand as needed
molformula := REPLACE(REPLACE(molformula,'-',''),'+','');

FOREACH element IN ARRAY elements LOOP
    count := 1;
    token := REGEXP_MATCHES(molformula, element || '[\d?]*');

    IF (token[1] IS NOT NULL) THEN
        token := REGEXP_MATCHES(token[1],'[0-9]+');
        IF (token[1] IS NOT NULL) THEN
            count := token[1]::INTEGER;
        END IF;
        RETURN NEXT;
    END IF;
END LOOP;
RETURN;
END;
$BODY$
  LANGUAGE plpgsql IMMUTABLE STRICT
  COST 1000;


SELECT * FROM elemental_composition('C3H6NO2-');

And that's it. Did I already mention that PostgreSQL is awesome? :-)

Tuesday, July 21, 2015

1st impression of pg_shard

1st impression of pg_shard:

  1. It works as described, documented limitations apply
  2. Status '3' of a shard_placement means inactive, '1' means online
  3. Read the issues list - unless you want to discover them yourself :-)
  4. If a whole worker node goes offline this may go unnoticed, since there seems to be no heartbeat between the head and worker nodes unless you try to write data
  5. It already supports range based sharding but you have to activate it manually like so
  6. It does not support prepared statements at the moment

Thursday, June 25, 2015

nseq - A datatype for the efficient storage of nucleotide sequences in PostgreSQL

The nseq datatype allows storing DNA and RNA sequences, consisting of the letters AGCT or AGCU respectively, in PostgreSQL.

By encoding four bases per byte, it uses 75 percent less space on disk than text. While this idea is obvious and far from being novel, it still is one of the most efficient compression schemes for DNA/RNA, if not the most efficient.

As of now, nseq only supports very basic native operations. The main shortcoming is that it has to be decompressed into text - expanding by 4x - for e.g. substring operations and the like.

This will change.

Enough said - here it is...

Check out PostBIS as well, it already has many more features.

Wednesday, June 3, 2015

Update to pgchem::tigress isotope pattern generation code

The isotope_pattern() function now contains data for the stable isotopes of 82 elements.

Thus, it fully supports HMDB, UNPD and ChEBI (except the transuranics) and is available here.

The individually affected files are obwrapper.cpp and libmercury++.h, in case you want to update your installation in-place.

Wednesday, March 11, 2015

The Long Tail - vertical table partitioning III

"In part III I'll try to give a raw estimate how big the performance penalty is when the partitioned table switches from fast to slow storage during a query. And there is another important problem to be solved..."

It took some time...

Since I don't have an SMR HDD, I used a standard 5400 RPM 2TB HDD together with a 128 GB SSD instead. Both drives are attached to SATA2 ports.

The test machine is a Fujitsu Workstation with one Intel i7-4770, 16 GB RAM, running Windows 8.1 64 Bit and PostgreSQL 9.4.1 64 Bit.

CrystalDiskMark gives the following performance data for the pair:

HDD:

           Sequential Read :   139.764 MB/s
          Sequential Write :   128.897 MB/s
         Random Read 512KB :    17.136 MB/s
        Random Write 512KB :    71.074 MB/s
    Random Read 4KB (QD=1) :     0.280 MB/s [    68.3 IOPS]
   Random Write 4KB (QD=1) :     0.642 MB/s [   156.8 IOPS]
   Random Read 4KB (QD=32) :     0.999 MB/s [   243.8 IOPS]
  Random Write 4KB (QD=32) :     0.889 MB/s [   217.0 IOPS]

SSD:

           Sequential Read :   431.087 MB/s
          Sequential Write :   299.641 MB/s
         Random Read 512KB :   268.955 MB/s
        Random Write 512KB :   293.199 MB/s
    Random Read 4KB (QD=1) :    24.519 MB/s [  5986.0 IOPS]
   Random Write 4KB (QD=1) :    67.369 MB/s [ 16447.6 IOPS]
   Random Read 4KB (QD=32) :   328.456 MB/s [ 80189.5 IOPS]
  Random Write 4KB (QD=32) :   205.667 MB/s [ 50211.6 IOPS]


As you can see, the SSD is about 3x faster reading sequentially and about 85x - 328x faster reading random blocks, depending on the command queue depth.

PostgreSQL is running with

shared_buffers = 128kB # min 128kB

to minimize cache hits since I want to see how the disks perform.

For the 'benchmark' I first set up two tablespaces, hdd and ssd. Then the long tailed table was created as shown in the previous posts:

CREATE UNLOGGED TABLE fast
(
  id serial NOT NULL,
  value real
)
WITH (
  OIDS=FALSE
)
TABLESPACE ssd;

CREATE UNLOGGED TABLE slow
(
)
INHERITS (fast)
TABLESPACE hdd;

Then I created one billion rows in fast and slow:

INSERT INTO fast (value) SELECT random()*1000000000 FROM generate_series(1,1000000000);

INSERT INTO slow SELECT * FROM fast;

First, I wanted to see how each table performs with full table scans. All these numbers are ten-run averages - that's one reason why it took some time :-)

SELECT avg(value) FROM ONLY slow; -- 210 sec

SELECT avg(value) FROM ONLY fast; -- 90 sec

Which pretty much reflects the 3/1 ratio from CrystalDiskMark.
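
For the random read tests, the primary keys and their underlying indexes come next. A sketch of DDL that places the indexes on the SSD tablespace (constraint names are assumptions, the original statements aren't shown in this post):

ALTER TABLE fast ADD CONSTRAINT pk_fast PRIMARY KEY (id) USING INDEX TABLESPACE ssd;

ALTER TABLE slow ADD CONSTRAINT pk_slow PRIMARY KEY (id) USING INDEX TABLESPACE ssd;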

So for the random read test, each table has a primary key on its id column, but the underlying indexes sit on the SSD to be fair. Then, 10000 random rows were selected from the whole table:

SELECT avg(value) FROM ONLY fast WHERE id IN (SELECT 1+floor(random()*1000000000)::integer FROM generate_series(1,10000)); -- 6 sec

SELECT avg(value) FROM ONLY slow WHERE id IN (SELECT 1+floor(random()*1000000000)::integer FROM generate_series(1,10000)); -- 100 sec

Here, the HDD is about 16x slower than the SSD.

And from the top 20% of each table:

SELECT avg(value) FROM ONLY fast WHERE id IN (SELECT 800000001+floor(random()*200000000)::integer FROM generate_series(1,10000)); -- 5 sec


SELECT avg(value) FROM ONLY slow WHERE id IN (SELECT 800000001+floor(random()*200000000)::integer FROM generate_series(1,10000)); -- 80 sec

Again, the HDD is about 16x slower than the SSD.

Knowing how each table performs, I then moved the top 20% of rows into fast and left the remaining 80% in slow, thus creating the long tailed table.
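
One way to do that redistribution (a sketch, not from the original post - it assumes, like the queries above, that ids above 800000000 are the hot 20%):

DELETE FROM ONLY fast WHERE id <= 800000000;

DELETE FROM ONLY slow WHERE id > 800000000;

With the rows split up like that, a full scan over the whole long tailed table: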

SELECT avg(value) FROM fast; -- 178 sec

Surprise, surprise: 210*0.8=168, 90*0.2=18, 168+18=186. The long tailed table is not slower than its individual parts!

And with random reads?

Whole table:

SELECT avg(value) FROM fast WHERE id IN (SELECT 1+floor(random()*1000000000)::integer FROM generate_series(1,10000)); -- 50 sec

It's way faster than the table on the SSD alone. This seems to be an anomaly I cannot explain at the moment. Either it helps a lot to have two indexes instead of one, or most rows were selected from the SSD part.

Top 20% only:

SELECT avg(value) FROM fast WHERE id IN (SELECT 800000001+floor(random()*200000000)::integer FROM generate_series(1,10000)); -- 4 sec

A bit faster than having the whole table on SSD.

Conclusion:

Aside from the (positive) anomaly with random reads on the whole long tailed table, using a long tailed table is at least not slower than a vanilla table, but you can put your data graveyard on slow but inexpensive storage while keeping the hot rows and the indexes on the fast drives - completely transparent for the clients.

However, one question remains...

Is it possible to ask PostgreSQL what the most frequently accessed rows of a table are?

If so, the background worker could balance the long tailed table without having to know a specific, application dependent access pattern!

And that would be the icing on the cake...

A quick glance over pg_stat* and pg_statio* didn't show anything usable for this task, but I'm open to suggestions. :-)