Welcome to episode 17 of the Steampipe+Mastodon series, in which we introduce a new subplot: timeline history. So far, the examples I've shown and discussed deal with current timelines. We've seen SQL queries that fetch results from real-time calls to the Mastodon API, and Steampipe dashboards that display those results. But Steampipe isn't just an API siphon, it's also a Postgres database. As such it supports the transient tables created by Steampipe's foreign data wrapper and plugins, but also allows you to create your own native tables. And you can use those native tables to accumulate data from the ephemeral foreign tables.

Because saving and searching Mastodon data is a sensitive subject in the fediverse (none of us wants to recapitulate Big Social), I've focused so far on queries that explore the current Mastodon flow, of which there are plenty more to write. But nobody should mind me remembering my own home timeline, so a few weeks ago I made a tool to read it hourly and add new toots to a Postgres table.

Before you can add any toots to a table, of course, you've got to create that table.
Here's how I made this one.

    create table mastodon_home_timeline as
      select * from mastodon_toot_home limit 200

Once created, the table can be updated with new toots like so.

    with data as (
      select
        account,
        -- more columns
        username
      from
        mastodon_toot_home
      limit 200
    )
    insert into mastodon_home_timeline (
      account,
      -- more columns
      username
    )
    select * from data
    where id not in ( select t.id from mastodon_home_timeline t )

To run that query from a crontab, on a machine where Steampipe is installed, save it as mastodon_home_timeline.sql, then schedule it.

    15 * * * * cd /home/jon/mastodon; steampipe query mastodon_home_timeline.sql

That's it!
Now the number reported by select count(*) from mastodon_home_timeline is growing hourly.
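As a first sanity check on that growth, here's a minimal sketch of a per-day count. It assumes the saved table inherited a created_at timestamp column from mastodon_toot_home; since the table was created with select *, it carries whatever columns the foreign table exposed at capture time.

    -- Sketch: how many toots were captured per day?
    -- Assumes created_at came along from mastodon_toot_home.
    select
      created_at::date as day,
      count(*) as toots
    from
      mastodon_home_timeline
    group by
      day
    order by
      day;
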
I've only been collecting toots for a couple of weeks, and haven't yet started to explore that data; we'll see what happens
when we get there. Meanwhile, though, I want to show how such exploration can be a team exercise.

A friend of mine, whom I'll call Elvis, shares my interest in teasing out connections among people, servers, and hashtags. He could capture his own timeline using
the approach shown here. But since we'll be looking at this data together, we agreed that I'll gather both our timelines. To enable that, he shared a (revocable) Mastodon API token that I've used to configure Steampipe with credentials for both our accounts.

    connection "mastodon_social_jon" {
      plugin       = "mastodon"
      server       = "https://mastodon.social"
      access_token = "..."
    }

    connection "mastodon_social_elvis" {
      plugin       = "mastodon"
      server       = "https://mastodon.social"
      access_token = "..."
    }

Steampipe's foreign data wrapper turns each of these named connections into its own Postgres schema.
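Because Steampipe is just Postgres, you can confirm those per-connection schemas with an ordinary catalog query. This is a sketch using information_schema; each named connection should show up as a schema containing its own mastodon_toot_home foreign table.

    -- Sketch: list the schemas that expose the Mastodon home-timeline table.
    select distinct
      table_schema
    from
      information_schema.tables
    where
      table_name = 'mastodon_toot_home';
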
Although we happen to share the same home server, by the way, we needn't. A team working together like this might pool timelines from mastodon.social and hachyderm.io and fosstodon.org and any other Mastodon-API-compatible server, as sketched below. (You can do the same thing with AWS or Slack or GitHub or any other kind of account by defining multiple connections. Steampipe makes API calls concurrently across parallel connections.)
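Here's what that multi-server pooling might look like in the connection config. This is a sketch: the hachyderm_alice teammate is hypothetical, and the plugin, server, and access_token arguments mirror the blocks shown above.

    # Sketch: pooling timelines from different Mastodon servers.
    # "hachyderm_alice" is a hypothetical teammate on another server.
    connection "mastodon_social_jon" {
      plugin       = "mastodon"
      server       = "https://mastodon.social"
      access_token = "..."
    }

    connection "hachyderm_alice" {
      plugin       = "mastodon"
      server       = "https://hachyderm.io"
      access_token = "..."
    }
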
With the two connections above in place, I can read my timeline like so.

    select * from mastodon_social_jon.mastodon_toot_home limit 200

And Elvis's like so.

    select * from mastodon_social_elvis.mastodon_toot_home limit 200

If I want to query both in real time, for example to count the combined total, I can use a SQL UNION.
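Here's a minimal sketch of that combined count. It assumes a toot's id identifies it uniquely, so UNION collapses any toot that appears in both timelines; use UNION ALL to keep duplicates instead.

    -- Sketch: combined toot count across both home timelines.
    select count(*)
    from (
      ( select id from mastodon_social_jon.mastodon_toot_home limit 200 )
      union
      ( select id from mastodon_social_elvis.mastodon_toot_home limit 200 )
    ) as combined;
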
Or I can define an umbrella connection that aggregates these two.

    connection "all_mastodon" {
      plugin      = "mastodon"
      type        = "aggregator"
      connections = [ "mastodon_social_jon", "mastodon_social_elvis" ]
    }

    connection "mastodon_social_jon" {
      plugin       = "mastodon"
      server       = "https://mastodon.social"
      access_token = "..."
    }

    connection "mastodon_social_elvis" {
      plugin       = "mastodon"
      server       = "https://mastodon.social"
      access_token = "..."
    }

Now the query select * from all_mastodon.mastodon_toot_home limit 200 makes API calls on behalf of both accounts, in parallel, and combines the results. When we follow the resulting URLs in order to reply or boost, we'll do so as our individual identities. And we'll be able to use Steampipe queries and dashboards in that same single-user mode. But we'll also be able to pool our timelines and point our queries and dashboards at the combined history.
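To pool the saved history as well, one option (a sketch, following the pattern of the hourly insert above) is to point that same insert at the aggregator schema; the id-based dedupe then keeps a single copy of any toot that appears in both timelines. In practice we might give the pooled history its own table, but the mechanics are the same.

    -- Sketch: capture both timelines into the shared native table,
    -- reusing the dedupe pattern from the hourly insert above.
    with data as (
      select
        account,
        -- more columns
        username
      from
        all_mastodon.mastodon_toot_home
      limit 200
    )
    insert into mastodon_home_timeline (
      account,
      -- more columns
      username
    )
    select * from data
    where id not in ( select t.id from mastodon_home_timeline t );
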
Will that prove interesting? Useful? That remains to be seen. I think it's one of many experiments worth trying as the fediverse sorts itself out. And I see Steampipe as one laboratory in which to run such experiments. With SQL as the abstraction over APIs, aggregation of connections, and dashboards as code, you have all the ingredients needed to iterate quickly, at low cost, toward shared Mastodon spaces tailored for teams or groups.

This series:

Autonomy, packet size, friction, fanout, and velocity
Mastodon, Steampipe, and RSS
Browsing the fediverse
A Bloomberg terminal for Mastodon
Create your own Mastodon UX
Lists and people on Mastodon
How many people in my Mastodon feed also tweeted today?
Instance-qualified Mastodon URLs
Mastodon relationship graphs
Working with Mastodon lists
Images considered harmful (sometimes)
Mapping the wider fediverse
Protocols, APIs, and conventions
News in the fediverse
Mapping