Compare commits
9 Commits
37b058af18 ... 327140393f
| Author | SHA1 | Date |
| --- | --- | --- |
| Bram van den Heuvel | 327140393f | |
| Bram van den Heuvel | 8e647a870e | |
| Bram van den Heuvel | a68253cc43 | |
| Bram van den Heuvel | da0fe70def | |
| Bram van den Heuvel | d12466e82a | |
| Bram van den Heuvel | 4b92d9ea20 | |
| Bram van den Heuvel | 4777de5b67 | |
| Bram van den Heuvel | 865e83cdae | |
| Bram van den Heuvel | 2cb21dc102 | |
elm.json (3 changes)
@ -19,7 +19,8 @@
        "elm/http": "2.0.0 <= v < 3.0.0",
        "elm/json": "1.0.0 <= v < 2.0.0",
        "elm/time": "1.0.0 <= v < 2.0.0",
        "elm/url": "1.0.0 <= v < 2.0.0"
        "elm/url": "1.0.0 <= v < 2.0.0",
        "miniBill/elm-fast-dict": "1.0.0 <= v < 2.0.0"
    },
    "test-dependencies": {}
}
|
@ -103,3 +103,41 @@ When building a task chain, we start with an empty context of type `Context {}`
Effectively, this means we no longer need to specify a separate data type that stores any relevant information that is already available in the `Vault`. Instead, the `Vault`'s phantom type specifies that functions can be used if and only if certain values are available.

The phantom type is exclusively an internal type and will therefore never be communicated to the end user of the library.

### The Cmd type needs improved transparency

It is difficult to build a complex Vault type without giving up on transparency.
One of Elm's strengths is its purity, which guarantees the user that a library
doesn't dump all of their passwords onto the world wide web. Expecting the user
to blindly accept anything that passes between the Vault and the Elm runtime
takes that core value away.

I have spent months talking to different people about how to tackle this, and my
conclusion is as follows: it is simply easier to trust the Vault's integrity,
especially given that the library is open source. However, to give more insight
into what happened, every result comes back with effective ways to debug the
connection.

Instead of using the core's tasks, we introduce a more complex `Task` type:

```elm
type MatrixTask = MatrixTask -- ...
```

This task supports the same operations, but it is a bit more intelligent after
its execution:

1. The `MatrixTask` remembers what HTTP request(s) it has executed. Each HTTP
   request can be exported in different formats:

    - A `curl` command in string format
    - A `Task` type so it can be run again
    - A `MatrixTask` type to recreate the same command

2. The `MatrixTask` collects a human-readable log, catching both acceptable logs
   and error logs using a `List (Result String String)` type.

Hopefully, these functions should make the `MatrixTask` type sufficiently
transparent to support trust, security and the ability to debug network traffic
as an end user.
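As a sketch of how an end user might consume such a log (my own illustration; the `logs` accessor is an assumption, not something this change defines), each `Result String String` entry can be rendered for display:

```elm
-- Sketch only: `logs` is a hypothetical accessor for the collected log.
showDebugInfo : MatrixTask -> List String
showDebugInfo task =
    logs task
        |> List.map
            (\entry ->
                case entry of
                    Ok line ->
                        "OK:  " ++ line

                    Err line ->
                        "ERR: " ++ line
            )
```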
@ -31,30 +31,52 @@ type as a message to the Vault to update certain information.
|
|||
-}
|
||||
|
||||
import Http
|
||||
import Internal.Api.Helpers as Helpers
|
||||
import Internal.Tools.Context as Context exposing (Context)
|
||||
import Internal.Tools.Exceptions as X
|
||||
import Task exposing (Task)
|
||||
|
||||
{-| The TaskChain is a piece in the long chain of tasks that need to be completed.
|
||||
The type defines four variables:
|
||||
|
||||
- `err` value that may arise on an error
|
||||
- `u` the update msg that should be returned
|
||||
- `a` phantom type before executing the chain's context
|
||||
- `b` phantom type after executing the chain's context
|
||||
-}
|
||||
type alias TaskChain err u a b =
|
||||
Context a -> Task (FailedChainPiece err u) (TaskChainPiece u a b)
|
||||
|
||||
{-| An IdemChain is a TaskChain that does not influence the chain's context
|
||||
|
||||
- `err` value that may arise on an error
|
||||
- `u` the update msg that should be executed
|
||||
- `a` phantom type before, during and after the chain's context
|
||||
-}
|
||||
type alias IdemChain err u a =
|
||||
TaskChain err u a a
|
||||
|
||||
|
||||
{-| A CompleteChain is a complete snake that can be safely run and executed by
|
||||
the Elm core.
|
||||
-}
|
||||
type alias CompleteChain u =
|
||||
TaskChain () u {} {}
|
||||
|
||||
{-| A TaskChainPiece is a piece that updates the chain's context.
|
||||
|
||||
Once a chain is executed, the process will add the `messages` value to its list
|
||||
of updates, and it will update its context according to the `contextChange`
|
||||
function.
|
||||
-}
|
||||
type alias TaskChainPiece u a b =
|
||||
{ contextChange : Context a -> Context b
|
||||
, messages : List u
|
||||
}
|
||||
|
||||
|
||||
{-| A FailedChainPiece initiates an early breakdown of a chain. Unless caught,
|
||||
this halts execution of the chain. The process will add the `messages` value to
|
||||
its list of updates, and it will return the given `err` value for a direct
|
||||
explanation of what went wrong.
|
||||
-}
|
||||
type alias FailedChainPiece err u =
|
||||
{ error : err, messages : List u }
|
||||
|
||||
|
|
|
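-- Illustrative sketch (not part of this diff): a minimal IdemChain piece that
-- leaves the chain's context untouched and only emits one update message.
sendMessageOnly : u -> IdemChain err u a
sendMessageOnly msg _ =
    Task.succeed
        { contextChange = identity
        , messages = [ msg ]
        }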
@ -37,46 +37,3 @@ ratelimited task =
|
|||
_ ->
|
||||
Task.fail e
|
||||
)
|
||||
|
||||
|
||||
{-| Sometimes, you don't really care if something went wrong - you just want to try again.
|
||||
|
||||
This task will only return an error if it went wrong on the n'th attempt.
|
||||
|
||||
-}
|
||||
retryTask : Int -> Task X.Error a -> Task X.Error a
|
||||
retryTask n task =
|
||||
if n <= 0 then
|
||||
task
|
||||
|
||||
else
|
||||
Task.onError
|
||||
(\err ->
|
||||
let
|
||||
retry : Task X.Error a
|
||||
retry =
|
||||
retryTask (n - 1) task
|
||||
in
|
||||
case err of
|
||||
X.InternetException (Http.BadUrl _) ->
|
||||
Task.fail err
|
||||
|
||||
X.InternetException _ ->
|
||||
retry
|
||||
|
||||
X.SDKException (X.ServerReturnsBadJSON _) ->
|
||||
retry
|
||||
|
||||
X.SDKException _ ->
|
||||
Task.fail err
|
||||
|
||||
X.ServerException _ ->
|
||||
Task.fail err
|
||||
|
||||
X.ContextFailed _ ->
|
||||
Task.fail err
|
||||
|
||||
X.UnsupportedSpecVersion ->
|
||||
Task.fail err
|
||||
)
|
||||
task
|
||||
|
|
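-- Illustrative sketch (not part of this diff): a call site that retries
-- transient failures, giving up after four attempts in total.
withRetries : Task X.Error a -> Task X.Error a
withRetries =
    retryTask 3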
|
@ -20,9 +20,9 @@ Note that **under development** doesn't always mean that it _will be_ supported.
|
|||
|
||||
| **Spec version** | | Syncing | Redaction |
| ---------------- | - | ------- | --------- |
| v1.8 || ⚡ | ✔️ |
| v1.7 || ⚡ | ✔️ |
| v1.6 || ⚠️ | ✔️ |
| v1.8 || ✔️ | ✔️ |
| v1.7 || ✔️ | ✔️ |
| v1.6 || ✔️ | ✔️ |
| v1.5 || ✔️ | ✔️ |
| v1.4 || ✔️ | ✔️ |
| v1.3 || ✔️ | ✔️ |
|
||||
|
@ -64,8 +64,8 @@ Note that **under development** doesn't always mean that it _will be_ supported.
|
|||
|
||||
| **Spec version** | | Event | Joined members | Event at timestamp |
| ---------------- | - | ----- | -------------- | ------------------ |
| v1.8 || ✔️ | ✔️ | ⚡ |
| v1.7 || ✔️ | ✔️ | ⚡ |
| v1.8 || ✔️ | ✔️ | ⚠️ |
| v1.7 || ✔️ | ✔️ | ⚠️ |
| v1.6 || ✔️ | ✔️ | ⚠️ |
| v1.5 || ✔️ | ✔️ | ⛔ |
| v1.4 || ✔️ | ✔️ | ⛔ |
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
module Internal.Api.Snackbar exposing (..)
|
||||
|
||||
{-| The snackbar module helps wrap relevant credentials, access tokens, refresh tokens and more around internal types.
|
||||
{-| The snackbar module helps wrap relevant credentials, access tokens,
|
||||
refresh tokens and more around internal types.
|
||||
|
||||
Vault, Room and Event types don't need access to API tokens,
|
||||
but a user may want to redact an event, leave a room or reject an invite.
|
||||
|
@ -14,6 +15,7 @@ without needing to update every data type whenever any of the tokens change.
|
|||
|
||||
import Dict exposing (Dict)
|
||||
import Internal.Api.Versions.V1.Versions as V
|
||||
import Internal.Config.DefaultSettings as DS
|
||||
import Internal.Tools.LoginValues as Login exposing (AccessToken(..))
|
||||
import Task exposing (Task)
|
||||
|
||||
|
@ -27,9 +29,14 @@ type Snackbar a vu
|
|||
, homeserver : String
|
||||
, transactionOffset : Int
|
||||
, vs : Maybe V.Versions
|
||||
, settings : Settings
|
||||
}
|
||||
|
||||
|
||||
type alias Settings =
|
||||
{ syncTimeout : Int }
|
||||
|
||||
|
||||
accessToken : Snackbar a vu -> AccessToken
|
||||
accessToken (Snackbar { access }) =
|
||||
access
|
||||
|
@ -92,6 +99,9 @@ init data =
|
|||
, failedTasks = Dict.empty
|
||||
, failedTasksOffset = 0
|
||||
, homeserver = data.baseUrl
|
||||
, settings =
|
||||
{ syncTimeout = DS.syncTimeout
|
||||
}
|
||||
, transactionOffset = 0
|
||||
, vs = Nothing
|
||||
}
|
||||
|
@ -105,6 +115,7 @@ map f (Snackbar data) =
|
|||
, failedTasks = data.failedTasks
|
||||
, failedTasksOffset = 0
|
||||
, homeserver = data.homeserver
|
||||
, settings = data.settings
|
||||
, transactionOffset = data.transactionOffset
|
||||
, vs = data.vs
|
||||
}
|
||||
|
@ -135,6 +146,11 @@ setTransactionOffset i (Snackbar data) =
|
|||
Snackbar { data | transactionOffset = max (data.transactionOffset + 1) (i + 1) }
|
||||
|
||||
|
||||
updateSettings : (Settings -> Settings) -> Snackbar a vu -> Snackbar a vu
|
||||
updateSettings f (Snackbar ({ settings } as data)) =
|
||||
Snackbar { data | settings = f settings }
|
||||
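-- Illustrative sketch (not part of this diff): doubling the sync timeout
-- through the new updateSettings helper.
withLongerTimeout : Snackbar a vu -> Snackbar a vu
withLongerTimeout =
    updateSettings (\s -> { s | syncTimeout = s.syncTimeout * 2 })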
|
||||
|
||||
userId : Snackbar a vu -> Maybe String
|
||||
userId (Snackbar { access }) =
|
||||
Login.getUserId access
|
||||
|
|
|
@ -33,3 +33,10 @@ supportedVersions =
|
|||
defaultDeviceName : String
|
||||
defaultDeviceName =
|
||||
"Elm Matrix SDK (v" ++ currentVersion ++ ")"
|
||||
|
||||
|
||||
{-| The number of seconds that the Matrix Vault should wait for a response from the Matrix homeserver.
|
||||
-}
|
||||
syncTimeout : Int
|
||||
syncTimeout =
|
||||
10
|
||||
|
|
|
@ -16,7 +16,7 @@ import Internal.Values.Room as Internal
|
|||
import Internal.Values.StateManager as StateManager
|
||||
import Internal.Values.Timeline as Timeline
|
||||
import Json.Encode as E
|
||||
import Task exposing (Task)
|
||||
import Task
|
||||
|
||||
|
||||
{-| The `Room` type represents a Matrix Room. It contains context information
|
||||
|
|
|
@ -0,0 +1,56 @@
|
|||
module Internal.Tools.DefaultDict exposing (..)
|
||||
|
||||
import FastDict as Dict exposing (Dict)
|
||||
|
||||
{-| A dictionary of keys and values that includes a default when a key doesn't exist.
|
||||
-}
|
||||
type DefaultDict k v
|
||||
= DefaultDict
|
||||
{ content : Dict k v
|
||||
, default : v
|
||||
}
|
||||
|
||||
{-| Create an empty dictionary that has a default value.
|
||||
-}
|
||||
empty : v -> DefaultDict k v
|
||||
empty v =
|
||||
DefaultDict
|
||||
{ content = Dict.empty
|
||||
, default = v
|
||||
}
|
||||
|
||||
{-| Get the value associated with the key. Uses the default if not found. -}
|
||||
get : comparable -> DefaultDict comparable v -> v
|
||||
get k (DefaultDict data) =
|
||||
Dict.get k data.content |> Maybe.withDefault data.default
|
||||
|
||||
{-| Insert a key-value pair into a dictionary with a default.
|
||||
-}
|
||||
insert : comparable -> v -> DefaultDict comparable v -> DefaultDict comparable v
|
||||
insert k v (DefaultDict data) =
|
||||
DefaultDict { data | content = Dict.insert k v data.content }
|
||||
|
||||
{-| "Remove" a value by making its value synchronize with the default value.
|
||||
-}
|
||||
remove : comparable -> DefaultDict comparable v -> DefaultDict comparable v
|
||||
remove k (DefaultDict data) =
|
||||
DefaultDict { data | content = Dict.remove k data.content }
|
||||
|
||||
{-| Update the default value of all unset keys.
|
||||
-}
|
||||
setDefault : v -> DefaultDict k v -> DefaultDict k v
|
||||
setDefault v (DefaultDict data) =
|
||||
DefaultDict { data | default = v }
|
||||
|
||||
{-| Update the value of a dictionary. If the returned (or received) value is `Nothing`,
|
||||
the key synchronizes with the default value.
|
||||
-}
|
||||
update : comparable -> (Maybe v -> Maybe v) -> DefaultDict comparable v -> DefaultDict comparable v
|
||||
update k fv (DefaultDict data) =
|
||||
DefaultDict { data | content = Dict.update k fv data.content }
|
||||
|
||||
{-| Update the default value.
|
||||
-}
|
||||
updateDefault : (v -> v) -> DefaultDict k v -> DefaultDict k v
|
||||
updateDefault f (DefaultDict data) =
|
||||
DefaultDict { data | default = f data.default }
|
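-- Illustrative sketch (not part of this diff): a per-user counter where every
-- unknown key implicitly starts at the default value 0.
incrementCount : String -> DefaultDict String Int -> DefaultDict String Int
incrementCount user counts =
    insert user (get user counts + 1) counts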
|
@ -0,0 +1,384 @@
|
|||
module Internal.Tools.Filters.Filter exposing (..)
|
||||
|
||||
import Internal.Tools.Filters.SpecObjects as SO
|
||||
import Internal.Tools.Filters.SimpleFilter as SF exposing (SimpleFilter)
|
||||
import Internal.Tools.SpecEnums as Enums
|
||||
|
||||
{-| Event filters tell the API what events to look for,
|
||||
but specifically for events that are unrelated to any room.
|
||||
-}
|
||||
type EventFilter
|
||||
= EventFilter
|
||||
{ limit : Maybe Int
|
||||
, senders : SimpleFilter String
|
||||
, types : SimpleFilter String
|
||||
}
|
||||
|
||||
{-| The complete filter type that combines the account data, presence and room filters.
|
||||
-}
|
||||
type Filter =
|
||||
Filter
|
||||
{ accountData : EventFilter
|
||||
, presence : EventFilter
|
||||
, room : RoomFilter
|
||||
}
|
||||
|
||||
{-| RoomFilter types tell the API what is considered relevant in a room,
|
||||
and which rooms to include.
|
||||
-}
|
||||
type RoomFilter
|
||||
= RoomFilter
|
||||
{ accountData : RoomEventFilter
|
||||
, ephemeral : RoomEventFilter
|
||||
, rooms : SimpleFilter String
|
||||
, timeline : RoomEventFilter
|
||||
}
|
||||
|
||||
{-| RoomEventFilter types tell the API what events to look for,
|
||||
and what ones to ignore.
|
||||
-}
|
||||
type RoomEventFilter
|
||||
= RoomEventFilter
|
||||
{ lazyLoadMembers : Bool
|
||||
, limit : Maybe Int
|
||||
, rooms : SimpleFilter String
|
||||
, senders : SimpleFilter String
|
||||
, types : SimpleFilter String
|
||||
}
|
||||
|
||||
allEvents : EventFilter
|
||||
allEvents =
|
||||
EventFilter
|
||||
{ limit = Nothing
|
||||
, senders = SF.all
|
||||
, types = SF.all
|
||||
}
|
||||
|
||||
allFilters : Filter
|
||||
allFilters =
|
||||
Filter
|
||||
{ accountData = allEvents
|
||||
, presence = allEvents
|
||||
, room = allRooms
|
||||
}
|
||||
|
||||
allRooms : RoomFilter
|
||||
allRooms =
|
||||
RoomFilter
|
||||
{ accountData = allRoomEvents
|
||||
, ephemeral = allRoomEvents
|
||||
, rooms = SF.all
|
||||
, timeline = allRoomEvents
|
||||
}
|
||||
|
||||
allRoomEvents : RoomEventFilter
|
||||
allRoomEvents =
|
||||
RoomEventFilter
|
||||
{ lazyLoadMembers = False
|
||||
, limit = Nothing
|
||||
, rooms = SF.all
|
||||
, senders = SF.all
|
||||
, types = SF.all
|
||||
}
|
||||
|
||||
decodeEventFilter : SO.EventFilter -> EventFilter
|
||||
decodeEventFilter data =
|
||||
EventFilter
|
||||
{ limit = data.limit
|
||||
, senders = SF.toSimpleFilter data.senders data.notSenders
|
||||
, types = SF.toSimpleFilter data.types data.notTypes
|
||||
}
|
||||
|
||||
decodeFilter : SO.Filter -> Filter
|
||||
decodeFilter data =
|
||||
Filter
|
||||
{ accountData =
|
||||
data.accountData
|
||||
|> Maybe.map decodeEventFilter
|
||||
|> Maybe.withDefault allEvents
|
||||
, presence =
|
||||
data.presence
|
||||
|> Maybe.map decodeEventFilter
|
||||
|> Maybe.withDefault allEvents
|
||||
, room =
|
||||
data.room
|
||||
|> Maybe.map decodeRoomFilter
|
||||
|> Maybe.withDefault allRooms
|
||||
}
|
||||
|
||||
{-| Decode a RoomFilter from a spec-compliant format.
|
||||
-}
|
||||
decodeRoomFilter : SO.RoomFilter -> RoomFilter
|
||||
decodeRoomFilter data =
|
||||
let
|
||||
decodeREF : Maybe SO.RoomEventFilter -> RoomEventFilter
|
||||
decodeREF =
|
||||
Maybe.map decodeRoomEventFilter >> Maybe.withDefault allRoomEvents
|
||||
in
|
||||
RoomFilter
|
||||
{ accountData = decodeREF data.accountData
|
||||
, ephemeral = decodeREF data.ephemeral
|
||||
, rooms = SF.toSimpleFilter data.rooms data.notRooms
|
||||
, timeline = decodeREF data.timeline
|
||||
}
|
||||
|
||||
{-| Decode a RoomEventFilter from a spec-compliant format.
|
||||
-}
|
||||
decodeRoomEventFilter : SO.RoomEventFilter -> RoomEventFilter
|
||||
decodeRoomEventFilter data =
|
||||
RoomEventFilter
|
||||
{ lazyLoadMembers = data.lazyLoadMembers
|
||||
, limit = data.limit
|
||||
, rooms = SF.toSimpleFilter data.rooms data.notRooms
|
||||
, senders = SF.toSimpleFilter data.senders data.notSenders
|
||||
, types = SF.toSimpleFilter data.types data.notTypes
|
||||
}
|
||||
|
||||
{-| Encode an EventFilter into a spec-compliant format.
|
||||
-}
|
||||
encodeEventFilter : EventFilter -> SO.EventFilter
|
||||
encodeEventFilter (EventFilter data) =
|
||||
{ limit = data.limit
|
||||
, notSenders = SF.toExclude data.senders
|
||||
, notTypes = SF.toExclude data.types
|
||||
, senders = SF.toInclude data.senders
|
||||
, types = SF.toInclude data.types
|
||||
}
|
||||
|
||||
{-| Encode a Filter into a spec-compliant format.
|
||||
-}
|
||||
encodeFilter : Filter -> SO.Filter
|
||||
encodeFilter (Filter data) =
|
||||
{ accountData = Just <| encodeEventFilter data.accountData
|
||||
, eventFields = Nothing
|
||||
, eventFormat = Enums.Client
|
||||
, presence = Just <| encodeEventFilter data.presence
|
||||
, room = Just <| encodeRoomFilter data.room
|
||||
}
|
||||
|
||||
{-| Encode a RoomFilter into a spec-compliant format.
|
||||
-}
|
||||
encodeRoomFilter : RoomFilter -> SO.RoomFilter
|
||||
encodeRoomFilter (RoomFilter data) =
|
||||
{ accountData = Just <| encodeRoomEventFilter data.accountData
|
||||
, ephemeral = Just <| encodeRoomEventFilter data.ephemeral
|
||||
, includeLeave = False
|
||||
, notRooms = SF.toExclude data.rooms
|
||||
, rooms = SF.toInclude data.rooms
|
||||
, state = Just <| encodeRoomEventFilter data.timeline
|
||||
, timeline = Just <| encodeRoomEventFilter data.timeline
|
||||
}
|
||||
|
||||
{-| Encode a RoomEventFilter into a spec-compliant format.
|
||||
-}
|
||||
encodeRoomEventFilter : RoomEventFilter -> SO.RoomEventFilter
|
||||
encodeRoomEventFilter (RoomEventFilter data) =
|
||||
{ containsUrl = Nothing
|
||||
, includeRedundantMembers = False
|
||||
, lazyLoadMembers = data.lazyLoadMembers
|
||||
, limit = data.limit
|
||||
, notRooms = SF.toExclude data.rooms
|
||||
, notSenders = SF.toExclude data.senders
|
||||
, notTypes = SF.toExclude data.types
|
||||
, rooms = SF.toInclude data.rooms
|
||||
, senders = SF.toInclude data.senders
|
||||
, types = SF.toInclude data.types
|
||||
, unreadThreadNotifications = True
|
||||
}
|
||||
|
||||
{-| Flatten a filter.
|
||||
-}
|
||||
flattenFilter : Filter -> List (SimpleFilter String)
|
||||
flattenFilter (Filter f) =
|
||||
List.concat
|
||||
[ flattenEventFilter f.accountData
|
||||
, flattenEventFilter f.presence
|
||||
, flattenRoomFilter f.room
|
||||
]
|
||||
|
||||
{-| Flatten a EventFilter.
|
||||
-}
|
||||
flattenEventFilter : EventFilter -> List (SimpleFilter String)
|
||||
flattenEventFilter (EventFilter f) = [ f.senders, f.types ]
|
||||
|
||||
{-| Flatten a RoomFilter.
|
||||
-}
|
||||
flattenRoomFilter : RoomFilter -> List (SimpleFilter String)
|
||||
flattenRoomFilter (RoomFilter f) =
|
||||
[ f.accountData, f.ephemeral, f.timeline ]
|
||||
|> List.map flattenRoomEventFilter
|
||||
|> List.concat
|
||||
|> (::) f.rooms
|
||||
|
||||
{-| Flatten a RoomEventFilter.
|
||||
-}
|
||||
flattenRoomEventFilter : RoomEventFilter -> List (SimpleFilter String)
|
||||
flattenRoomEventFilter (RoomEventFilter f) = [ f.rooms, f.senders, f.types ]
|
||||
|
||||
{-| Get an intersection of a Filter.
|
||||
-}
|
||||
intersectFilter : Filter -> Filter -> Filter
|
||||
intersectFilter (Filter f1) (Filter f2) =
|
||||
Filter
|
||||
{ accountData = intersectEventFilter f1.accountData f2.accountData
|
||||
, presence = intersectEventFilter f1.presence f2.presence
|
||||
, room = intersectRoomFilter f1.room f2.room
|
||||
}
|
||||
|
||||
{-| Get an intersection of a EventFilter.
|
||||
-}
|
||||
intersectEventFilter : EventFilter -> EventFilter -> EventFilter
|
||||
intersectEventFilter (EventFilter f1) (EventFilter f2) =
|
||||
EventFilter
|
||||
{ limit =
|
||||
case (f1.limit, f2.limit) of
|
||||
(Just l1, Just l2) ->
|
||||
Just (max l1 l2)
|
||||
|
||||
(Just _, Nothing) ->
|
||||
f1.limit
|
||||
|
||||
(Nothing, Just _) ->
|
||||
f2.limit
|
||||
|
||||
(Nothing, Nothing) ->
|
||||
Nothing
|
||||
, senders = SF.intersect f1.senders f2.senders
|
||||
, types = SF.intersect f1.types f2.types
|
||||
}
|
||||
|
||||
{-| Get an intersection of a RoomFilter.
|
||||
-}
|
||||
intersectRoomFilter : RoomFilter -> RoomFilter -> RoomFilter
|
||||
intersectRoomFilter (RoomFilter f1) (RoomFilter f2) =
|
||||
RoomFilter
|
||||
{ accountData = intersectRoomEventFilter f1.accountData f2.accountData
|
||||
, ephemeral = intersectRoomEventFilter f1.ephemeral f2.ephemeral
|
||||
, rooms = SF.intersect f1.rooms f2.rooms
|
||||
, timeline = intersectRoomEventFilter f1.timeline f2.timeline
|
||||
}
|
||||
|
||||
{-| Get an intersection of a RoomEventFilter.
|
||||
-}
|
||||
intersectRoomEventFilter : RoomEventFilter -> RoomEventFilter -> RoomEventFilter
|
||||
intersectRoomEventFilter (RoomEventFilter f1) (RoomEventFilter f2) =
|
||||
RoomEventFilter
|
||||
{ lazyLoadMembers = f1.lazyLoadMembers && f2.lazyLoadMembers
|
||||
, limit =
|
||||
case (f1.limit, f2.limit) of
|
||||
(Just l1, Just l2) ->
|
||||
Just (max l1 l2)
|
||||
|
||||
(Just _, Nothing) ->
|
||||
f1.limit
|
||||
|
||||
(Nothing, Just _) ->
|
||||
f2.limit
|
||||
|
||||
(Nothing, Nothing) ->
|
||||
Nothing
|
||||
, rooms = SF.intersect f1.rooms f2.rooms
|
||||
, senders = SF.intersect f1.senders f2.senders
|
||||
, types = SF.intersect f1.types f2.types
|
||||
}
|
||||
|
||||
{-| Check whether a filter is a subset of another filter.
|
||||
-}
|
||||
isSubSet : Filter -> Filter -> Bool
|
||||
isSubSet f1 f2 =
|
||||
let
|
||||
isSame : List (SimpleFilter String) -> List (SimpleFilter String) -> Bool
|
||||
isSame l1 l2 =
|
||||
case (l1, l2) of
|
||||
(h1 :: t1, h2 :: t2) ->
|
||||
SF.subset h1 h2 && isSame t1 t2
|
||||
([], []) ->
|
||||
True
|
||||
_ ->
|
||||
False
|
||||
in
|
||||
isSame (flattenFilter f1) (flattenFilter f2)
|
||||
|
||||
lazyLoadMembers : Bool -> RoomEventFilter -> RoomEventFilter
|
||||
lazyLoadMembers b (RoomEventFilter data) =
|
||||
RoomEventFilter { data | lazyLoadMembers = b }
|
||||
|
||||
{-| Determine a limit for the amount of events. If no limit is given, the homeserver decides this limit for itself.
|
||||
-}
|
||||
setEventLimit : Maybe Int -> RoomEventFilter -> RoomEventFilter
|
||||
setEventLimit i (RoomEventFilter data) =
|
||||
RoomEventFilter { data | limit = i }
|
||||
|
||||
{-| Include a specific event type.
|
||||
-}
|
||||
withEventType : String -> RoomEventFilter -> RoomEventFilter
|
||||
withEventType x (RoomEventFilter ({ types } as data)) =
|
||||
RoomEventFilter { data | types = SF.with x types }
|
||||
|
||||
{-| Include all event types that haven't been explicitly mentioned.
|
||||
-}
|
||||
withOtherEventTypes : RoomEventFilter -> RoomEventFilter
|
||||
withOtherEventTypes (RoomEventFilter ({ types } as data)) =
|
||||
RoomEventFilter { data | types = SF.withOthers types }
|
||||
|
||||
{-| Include all rooms that haven't been explicitly mentioned.
|
||||
-}
|
||||
withOtherRooms : RoomEventFilter -> RoomEventFilter
|
||||
withOtherRooms (RoomEventFilter ({ rooms } as data)) =
|
||||
RoomEventFilter { data | rooms = SF.withOthers rooms }
|
||||
|
||||
{-| Include all senders that haven't been explicitly mentioned.
|
||||
-}
|
||||
withOtherSenders : RoomEventFilter -> RoomEventFilter
|
||||
withOtherSenders (RoomEventFilter ({ senders } as data)) =
|
||||
RoomEventFilter { data | senders = SF.withOthers senders }
|
||||
|
||||
{-| Include a specific room.
|
||||
-}
|
||||
withRoom : String -> RoomEventFilter -> RoomEventFilter
|
||||
withRoom x (RoomEventFilter ({ rooms } as data)) =
|
||||
RoomEventFilter { data | rooms = SF.with x rooms }
|
||||
|
||||
{-| Include a specific sender.
|
||||
-}
|
||||
withSender : String -> RoomEventFilter -> RoomEventFilter
|
||||
withSender x (RoomEventFilter ({ senders } as data)) =
|
||||
RoomEventFilter { data | senders = SF.with x senders }
|
||||
|
||||
{-| Ignore a specific event type.
|
||||
-}
|
||||
withoutEventType : String -> RoomEventFilter -> RoomEventFilter
|
||||
withoutEventType x (RoomEventFilter ({ types } as data)) =
|
||||
RoomEventFilter { data | types = SF.without x types }
|
||||
|
||||
{-| Ignore all event types that haven't been explicitly mentioned.
|
||||
-}
|
||||
withoutOtherEventTypes : RoomEventFilter -> RoomEventFilter
|
||||
withoutOtherEventTypes (RoomEventFilter ({ types } as data)) =
|
||||
RoomEventFilter { data | types = SF.withoutOthers types }
|
||||
|
||||
{-| Ignore all rooms that haven't been explicitly mentioned.
|
||||
-}
|
||||
withoutOtherRooms : RoomEventFilter -> RoomEventFilter
|
||||
withoutOtherRooms (RoomEventFilter ({ rooms } as data)) =
|
||||
RoomEventFilter { data | rooms = SF.withoutOthers rooms }
|
||||
|
||||
{-| Ignore all senders that haven't been explicitly mentioned.
|
||||
-}
|
||||
withoutOtherSenders : RoomEventFilter -> RoomEventFilter
|
||||
withoutOtherSenders (RoomEventFilter ({ senders } as data)) =
|
||||
RoomEventFilter { data | senders = SF.withoutOthers senders }
|
||||
|
||||
{-| Ignore a specific room.
|
||||
-}
|
||||
withoutRoom : String -> RoomEventFilter -> RoomEventFilter
|
||||
withoutRoom x (RoomEventFilter ({ rooms } as data)) =
|
||||
RoomEventFilter { data | rooms = SF.without x rooms }
|
||||
|
||||
{-| Ignore a specific sender.
|
||||
-}
|
||||
withoutSender : String -> RoomEventFilter -> RoomEventFilter
|
||||
withoutSender x (RoomEventFilter ({ senders } as data)) =
|
||||
RoomEventFilter { data | senders = SF.without x senders }
|
||||
|
||||
|
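-- Illustrative sketch (not part of this diff): a room event filter that keeps
-- only m.room.message events and drops one noisy sender.
messagesOnly : RoomEventFilter
messagesOnly =
    allRoomEvents
        |> withoutOtherEventTypes
        |> withEventType "m.room.message"
        |> withoutSender "@spam:example.org"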
|
@ -0,0 +1,85 @@
|
|||
module Internal.Tools.Filters.Main exposing (..)
|
||||
{-| This module contains the main functions used to get, manipulate and change
|
||||
filters as needed.
|
||||
-}
|
||||
|
||||
import Internal.Tools.Filters.Filter as F
|
||||
import Internal.Tools.Filters.SimpleFilter as SF
|
||||
|
||||
type alias Filter = F.Filter
|
||||
|
||||
type alias SimpleFilter = SF.SimpleFilter String
|
||||
|
||||
{-| Filter that includes everything by default, but leaves a few specific values out.
|
||||
|
||||
When provided with an empty list, the filter allows all types.
|
||||
-}
|
||||
allExcept : List String -> SimpleFilter
|
||||
allExcept =
|
||||
List.foldl SF.without SF.all
|
||||
|
||||
{-| Filter that excludes everything by default, but lets a few specific values in.
|
||||
|
||||
When provided with an empty list, the filter allows nothing.
|
||||
-}
|
||||
only : List String -> SimpleFilter
|
||||
only =
|
||||
List.foldl SF.with SF.none
|
||||
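-- Illustrative sketch (not part of this diff): simple filters built from
-- plain lists with the helpers above.
noSpamBots : SimpleFilter
noSpamBots =
    allExcept [ "@spambot:example.org" ]

onlyMessages : SimpleFilter
onlyMessages =
    only [ "m.room.message", "m.room.encrypted" ]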
|
||||
fromSimpleFilter :
|
||||
{ accountDataTypes : SimpleFilter
|
||||
, presence : { limit : Maybe Int, senders : SimpleFilter, types : SimpleFilter }
|
||||
, ephemeral : { limit : Maybe Int, senders : SimpleFilter, types : SimpleFilter }
|
||||
, roomIds : SimpleFilter
|
||||
, lazyLoadMembers : Bool
|
||||
, roomEvents : { limit : Maybe Int, senders : SimpleFilter, types : SimpleFilter }
|
||||
} -> Filter
|
||||
fromSimpleFilter data =
|
||||
F.Filter
|
||||
{ accountData =
|
||||
F.EventFilter
|
||||
{ limit = Nothing
|
||||
, senders = SF.all
|
||||
, types = data.accountDataTypes
|
||||
}
|
||||
, presence =
|
||||
F.EventFilter
|
||||
{ limit = data.presence.limit
|
||||
, senders = data.presence.senders
|
||||
, types = data.presence.types
|
||||
}
|
||||
, room =
|
||||
F.RoomFilter
|
||||
{ accountData =
|
||||
F.RoomEventFilter
|
||||
{ lazyLoadMembers = data.lazyLoadMembers
|
||||
, limit = Nothing
|
||||
, rooms = data.roomIds
|
||||
, senders = SF.all
|
||||
, types = data.accountDataTypes
|
||||
}
|
||||
, ephemeral =
|
||||
F.RoomEventFilter
|
||||
{ lazyLoadMembers = data.lazyLoadMembers
|
||||
, limit = data.ephemeral.limit
|
||||
, rooms = data.roomIds
|
||||
, senders = data.ephemeral.senders
|
||||
, types = data.ephemeral.types
|
||||
}
|
||||
, rooms = data.roomIds
|
||||
, timeline =
|
||||
F.RoomEventFilter
|
||||
{ lazyLoadMembers = data.lazyLoadMembers
|
||||
, limit = data.roomEvents.limit
|
||||
, rooms = data.roomIds
|
||||
, senders = data.roomEvents.senders
|
||||
, types = data.roomEvents.types
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{-| Get the intersection of two filters.
|
||||
-}
|
||||
intersect : Filter -> Filter -> Filter
|
||||
intersect =
|
||||
F.intersectFilter
|
|
@ -0,0 +1,172 @@
|
|||
module Internal.Tools.Filters.SimpleFilter exposing (..)
|
||||
{-| The SimpleFilter tracks values that should or should not be monitored.
|
||||
-}
|
||||
|
||||
import Dict exposing (Dict)
|
||||
|
||||
{-| SimpleFilter type that tracks items to include or exclude.
|
||||
-}
|
||||
type alias SimpleFilter a =
|
||||
{ specificOnes : Dict a Bool
|
||||
, includeOthers : Bool
|
||||
}
|
||||
|
||||
all : SimpleFilter a
|
||||
all =
|
||||
{ specificOnes = Dict.empty
|
||||
, includeOthers = True
|
||||
}
|
||||
|
||||
{-| Keep only the values that are included in the first filter and not in the second.
|
||||
-}
|
||||
diff : SimpleFilter comparable -> SimpleFilter comparable -> SimpleFilter comparable
|
||||
diff f1 f2 =
|
||||
{ specificOnes =
|
||||
Dict.merge
|
||||
(\k v1 -> Dict.insert k (v1 && not f2.includeOthers))
|
||||
(\k v1 v2 -> Dict.insert k (v1 && not v2))
|
||||
(\k v2 -> Dict.insert k (f1.includeOthers && not v2))
|
||||
f1.specificOnes
|
||||
f2.specificOnes
|
||||
Dict.empty
|
||||
, includeOthers = f1.includeOthers && not f2.includeOthers
|
||||
}
|
||||
|
||||
{-| Form a filter that only shows the values that two filters have in common.
|
||||
-}
|
||||
intersect : SimpleFilter comparable -> SimpleFilter comparable -> SimpleFilter comparable
|
||||
intersect f1 f2 =
|
||||
{ specificOnes =
|
||||
Dict.merge
|
||||
(\key v1 -> Dict.insert key (v1 && f2.includeOthers))
|
||||
(\key v1 v2 -> Dict.insert key (v1 && v2))
|
||||
(\key v2 -> Dict.insert key (f1.includeOthers && v2))
|
||||
f1.specificOnes
|
||||
f2.specificOnes
|
||||
Dict.empty
|
||||
, includeOthers = f1.includeOthers && f2.includeOthers
|
||||
}
|
||||
|
||||
{-| Start with a filter that includes none.
|
||||
-}
|
||||
none : SimpleFilter a
|
||||
none =
|
||||
{ specificOnes = Dict.empty
|
||||
, includeOthers = False
|
||||
}
|
||||
|
||||
{-| Check whether a SimpleFilter is a subset of another filter.
|
||||
-}
|
||||
subset : SimpleFilter comparable -> SimpleFilter comparable -> Bool
|
||||
subset small large =
|
||||
if small.includeOthers && not large.includeOthers then
|
||||
False
|
||||
else
|
||||
-- All elements of small are in large
|
||||
Dict.merge
|
||||
(\_ s ->
|
||||
if s && not large.includeOthers then
|
||||
always False
|
||||
else
|
||||
identity
|
||||
)
|
||||
(\_ s l ->
|
||||
if s && not l then
|
||||
always False
|
||||
else
|
||||
identity
|
||||
)
|
||||
(\_ l ->
|
||||
if small.includeOthers && not l then
|
||||
always False
|
||||
else
|
||||
identity
|
||||
)
|
||||
small.specificOnes
|
||||
large.specificOnes
|
||||
True
|
||||
|
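-- Illustrative sketch (not part of this diff): the filter that allows nothing
-- is a subset of the filter that allows everything, but not the other way round.
-- subset none all == True
-- subset all none == False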
||||
{-| Encode a SimpleFilter into a list of items to exclude.
|
||||
-}
|
||||
toExclude : SimpleFilter comparable -> Maybe (List comparable)
|
||||
toExclude f =
|
||||
f.specificOnes
|
||||
|> Dict.filter (always not)
|
||||
|> Dict.keys
|
||||
|> Just
|
||||
|
||||
{-| Encode a SimpleFilter into a list of items to include.
|
||||
-}
|
||||
toInclude : SimpleFilter comparable -> Maybe (List comparable)
|
||||
toInclude f =
|
||||
if f.includeOthers then
|
||||
Nothing
|
||||
else
|
||||
f.specificOnes
|
||||
|> Dict.filter (always identity)
|
||||
|> Dict.keys
|
||||
|> Just
|
||||
|
||||
{-| Create a SimpleFilter out of two optionally present lists.
|
||||
-}
|
||||
toSimpleFilter : Maybe (List comparable) -> Maybe (List comparable) -> SimpleFilter comparable
|
||||
toSimpleFilter these notThese =
|
||||
let
|
||||
no : List comparable
|
||||
no = Maybe.withDefault [] notThese
|
||||
in
|
||||
case these of
|
||||
Just yes ->
|
||||
{ specificOnes =
|
||||
Dict.union
|
||||
(Dict.fromList ( List.map (\x -> Tuple.pair x False) no ))
|
||||
(Dict.fromList ( List.map (\x -> Tuple.pair x True) yes ))
|
||||
, includeOthers = False
|
||||
}
|
||||
|
||||
Nothing ->
|
||||
{ specificOnes =
|
||||
no
|
||||
|> List.map (\x -> Tuple.pair x False)
|
||||
|> Dict.fromList
|
||||
, includeOthers = True
|
||||
}
|
||||
|
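-- Illustrative sketch (not part of this diff): a spec-style pair of
-- include/exclude lists becomes a SimpleFilter and encodes back losslessly.
roundTrip : SimpleFilter String
roundTrip =
    toSimpleFilter (Just [ "m.room.message" ]) (Just [ "m.room.topic" ])

-- toInclude roundTrip == Just [ "m.room.message" ]
-- toExclude roundTrip == Just [ "m.room.topic" ]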
||||
{-| Form a filter that includes a value if it is included in either filter.
|
||||
-}
|
||||
union : SimpleFilter comparable -> SimpleFilter comparable -> SimpleFilter comparable
|
||||
union f1 f2 =
|
||||
{ specificOnes =
|
||||
Dict.merge
|
||||
(\key v1 -> Dict.insert key (v1 || f2.includeOthers))
|
||||
(\key v1 v2 -> Dict.insert key (v1 || v2))
|
||||
(\key v2 -> Dict.insert key (f1.includeOthers || v2))
|
||||
f1.specificOnes
|
||||
f2.specificOnes
|
||||
Dict.empty
|
||||
, includeOthers = f1.includeOthers || f2.includeOthers
|
||||
}
|
||||
|
||||
{-| Add a value that should be included.
|
||||
-}
|
||||
with : comparable -> SimpleFilter comparable -> SimpleFilter comparable
|
||||
with x f =
|
||||
{ f | specificOnes = Dict.insert x True f.specificOnes }
|
||||
|
||||
{-| Include all values that haven't been mentioned.
|
||||
-}
|
||||
withOthers : SimpleFilter comparable -> SimpleFilter comparable
|
||||
withOthers f =
|
||||
{ f | includeOthers = True }
|
||||
|
||||
{-| Add a value that should be ignored.
|
||||
-}
|
||||
without : comparable -> SimpleFilter comparable -> SimpleFilter comparable
|
||||
without x f =
|
||||
{ f | specificOnes = Dict.insert x False f.specificOnes }
|
||||
|
||||
{-| Ignore all values that haven't been mentioned.
|
||||
-}
|
||||
withoutOthers : SimpleFilter comparable -> SimpleFilter comparable
|
||||
withoutOthers f =
|
||||
{ f | includeOthers = False }
|
|
@ -0,0 +1,246 @@
|
|||
module Internal.Tools.Filters.SpecObjects exposing
|
||||
( EventFilter
|
||||
, Filter
|
||||
, RoomEventFilter
|
||||
, RoomFilter
|
||||
, StateFilter
|
||||
, encodeEventFilter
|
||||
, encodeFilter
|
||||
, encodeRoomEventFilter
|
||||
, encodeRoomFilter
|
||||
, encodeStateFilter
|
||||
, eventFilterDecoder
|
||||
, filterDecoder
|
||||
, roomEventFilterDecoder
|
||||
, roomFilterDecoder
|
||||
, stateFilterDecoder
|
||||
)
|
||||
|
||||
{-| Automatically generated 'SpecObjects'
|
||||
|
||||
Last generated at Unix time 1681915222
|
||||
|
||||
-}
|
||||
|
||||
import Internal.Tools.DecodeExtra as D exposing (opField, opFieldWithDefault)
|
||||
import Internal.Tools.EncodeExtra exposing (maybeObject)
|
||||
import Internal.Tools.SpecEnums as Enums
|
||||
import Json.Decode as D
|
||||
import Json.Encode as E
|
||||
|
||||
|
||||
{-| Filter that describes which events to include/exclude.
|
||||
-}
|
||||
type alias EventFilter =
|
||||
{ limit : Maybe Int
|
||||
, notSenders : Maybe (List String)
|
||||
, notTypes : Maybe (List String)
|
||||
, senders : Maybe (List String)
|
||||
, types : Maybe (List String)
|
||||
}
|
||||
|
||||
|
||||
encodeEventFilter : EventFilter -> E.Value
|
||||
encodeEventFilter data =
|
||||
maybeObject
|
||||
[ ( "limit", Maybe.map E.int data.limit )
|
||||
, ( "not_senders", Maybe.map (E.list E.string) data.notSenders )
|
||||
, ( "not_types", Maybe.map (E.list E.string) data.notTypes )
|
||||
, ( "senders", Maybe.map (E.list E.string) data.senders )
|
||||
, ( "types", Maybe.map (E.list E.string) data.types )
|
||||
]
|
||||
|
||||
|
||||
eventFilterDecoder : D.Decoder EventFilter
|
||||
eventFilterDecoder =
|
||||
D.map5
|
||||
(\a b c d e ->
|
||||
{ limit = a, notSenders = b, notTypes = c, senders = d, types = e }
|
||||
)
|
||||
(opField "limit" D.int)
|
||||
(opField "not_senders" (D.list D.string))
|
||||
(opField "not_types" (D.list D.string))
|
||||
(opField "senders" (D.list D.string))
|
||||
(opField "types" (D.list D.string))
|
||||
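-- Illustrative sketch (not part of this diff): encoding and then decoding an
-- EventFilter is a lossless round trip.
-- D.decodeValue eventFilterDecoder (encodeEventFilter filter) == Ok filter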
|
||||
|
||||
{-| Main filter for filtering results
|
||||
-}
|
||||
type alias Filter =
|
||||
{ accountData : Maybe EventFilter
|
||||
, eventFields : Maybe (List String)
|
||||
, eventFormat : Enums.EventFormat
|
||||
, presence : Maybe EventFilter
|
||||
, room : Maybe RoomFilter
|
||||
}
|
||||
|
||||
|
||||
encodeFilter : Filter -> E.Value
|
||||
encodeFilter data =
|
||||
maybeObject
|
||||
[ ( "account_data", Maybe.map encodeEventFilter data.accountData )
|
||||
, ( "event_fields", Maybe.map (E.list E.string) data.eventFields )
|
||||
, ( "event_format", Just <| Enums.encodeEventFormat data.eventFormat )
|
||||
, ( "presence", Maybe.map encodeEventFilter data.presence )
|
||||
, ( "room", Maybe.map encodeRoomFilter data.room )
|
||||
]
|
||||
|
||||
|
||||
filterDecoder : D.Decoder Filter
|
||||
filterDecoder =
|
||||
D.map5
|
||||
(\a b c d e ->
|
||||
{ accountData = a, eventFields = b, eventFormat = c, presence = d, room = e }
|
||||
)
|
||||
(opField "account_data" eventFilterDecoder)
|
||||
(opField "event_fields" (D.list D.string))
|
||||
(opFieldWithDefault "event_format" Enums.Client Enums.eventFormatDecoder)
|
||||
(opField "presence" eventFilterDecoder)
|
||||
(opField "room" roomFilterDecoder)
|
||||
|
||||
|
||||
{-| Filter that describes which events to include/exclude in a Matrix room.
|
||||
-}
|
||||
type alias RoomEventFilter =
|
||||
{ containsUrl : Maybe Bool
|
||||
, includeRedundantMembers : Bool
|
||||
, lazyLoadMembers : Bool
|
||||
, limit : Maybe Int
|
||||
, notRooms : Maybe (List String)
|
||||
, notSenders : Maybe (List String)
|
||||
, notTypes : Maybe (List String)
|
||||
, rooms : Maybe (List String)
|
||||
, senders : Maybe (List String)
|
||||
, types : Maybe (List String)
|
||||
, unreadThreadNotifications : Bool
|
||||
}
|
||||
|
||||
|
||||
encodeRoomEventFilter : RoomEventFilter -> E.Value
|
||||
encodeRoomEventFilter data =
|
||||
maybeObject
|
||||
[ ( "contains_url", Maybe.map E.bool data.containsUrl )
|
||||
, ( "include_redundant_members", Just <| E.bool data.includeRedundantMembers )
|
||||
, ( "lazy_load_members", Just <| E.bool data.lazyLoadMembers )
|
||||
, ( "limit", Maybe.map E.int data.limit )
|
||||
, ( "not_rooms", Maybe.map (E.list E.string) data.notRooms )
|
||||
, ( "not_senders", Maybe.map (E.list E.string) data.notSenders )
|
||||
, ( "not_types", Maybe.map (E.list E.string) data.notTypes )
|
||||
, ( "rooms", Maybe.map (E.list E.string) data.rooms )
|
||||
, ( "senders", Maybe.map (E.list E.string) data.senders )
|
||||
, ( "types", Maybe.map (E.list E.string) data.types )
|
||||
, ( "unread_thread_notifications", Just <| E.bool data.unreadThreadNotifications )
|
||||
]
|
||||
|
||||
|
||||
roomEventFilterDecoder : D.Decoder RoomEventFilter
|
||||
roomEventFilterDecoder =
|
||||
D.map11
|
||||
(\a b c d e f g h i j k ->
|
||||
{ containsUrl = a, includeRedundantMembers = b, lazyLoadMembers = c, limit = d, notRooms = e, notSenders = f, notTypes = g, rooms = h, senders = i, types = j, unreadThreadNotifications = k }
|
||||
)
|
||||
(opField "contains_url" D.bool)
|
||||
(opFieldWithDefault "include_redundant_members" False D.bool)
|
||||
(opFieldWithDefault "lazy_load_members" False D.bool)
|
||||
(opField "limit" D.int)
|
||||
(opField "not_rooms" (D.list D.string))
|
||||
(opField "not_senders" (D.list D.string))
|
||||
(opField "not_types" (D.list D.string))
|
||||
(opField "rooms" (D.list D.string))
|
||||
(opField "senders" (D.list D.string))
|
||||
(opField "types" (D.list D.string))
|
||||
(opFieldWithDefault "unread_thread_notifications" False D.bool)
|
||||
|
||||
|
||||
{-| Filter that describes what should and shouldn't be included for rooms.
|
||||
-}
|
||||
type alias RoomFilter =
|
||||
{ accountData : Maybe RoomEventFilter
|
||||
, ephemeral : Maybe RoomEventFilter
|
||||
, includeLeave : Bool
|
||||
, notRooms : Maybe (List String)
|
||||
, rooms : Maybe (List String)
|
||||
, state : Maybe StateFilter
|
||||
, timeline : Maybe RoomEventFilter
|
||||
}
|
||||
|
||||
|
||||
encodeRoomFilter : RoomFilter -> E.Value
|
||||
encodeRoomFilter data =
|
||||
maybeObject
|
||||
[ ( "account_data", Maybe.map encodeRoomEventFilter data.accountData )
|
||||
, ( "ephemeral", Maybe.map encodeRoomEventFilter data.ephemeral )
|
||||
, ( "include_leave", Just <| E.bool data.includeLeave )
|
||||
, ( "not_rooms", Maybe.map (E.list E.string) data.notRooms )
|
||||
, ( "rooms", Maybe.map (E.list E.string) data.rooms )
|
||||
, ( "state", Maybe.map encodeStateFilter data.state )
|
||||
, ( "timeline", Maybe.map encodeRoomEventFilter data.timeline )
|
||||
]
|
||||
|
||||
|
||||
roomFilterDecoder : D.Decoder RoomFilter
|
||||
roomFilterDecoder =
|
||||
D.map7
|
||||
(\a b c d e f g ->
|
||||
{ accountData = a, ephemeral = b, includeLeave = c, notRooms = d, rooms = e, state = f, timeline = g }
|
||||
)
|
||||
(opField "account_data" roomEventFilterDecoder)
|
||||
(opField "ephemeral" roomEventFilterDecoder)
|
||||
(opFieldWithDefault "include_leave" False D.bool)
|
||||
(opField "not_rooms" (D.list D.string))
|
||||
(opField "rooms" (D.list D.string))
|
||||
(opField "state" stateFilterDecoder)
|
||||
(opField "timeline" roomEventFilterDecoder)
|
||||
|
||||
|
||||
{-| Filter that describes which events to include/exclude in a Matrix room.
|
||||
-}
|
||||
type alias StateFilter =
|
||||
{ containsUrl : Maybe Bool
|
||||
, includeRedundantMembers : Bool
|
||||
, lazyLoadMembers : Bool
|
||||
, limit : Maybe Int
|
||||
, notRooms : Maybe (List String)
|
||||
, notSenders : Maybe (List String)
|
||||
, notTypes : Maybe (List String)
|
||||
, rooms : Maybe (List String)
|
||||
, senders : Maybe (List String)
|
||||
, types : Maybe (List String)
|
||||
, unreadThreadNotifications : Bool
|
||||
}
|
||||
|
||||
|
||||
encodeStateFilter : StateFilter -> E.Value
|
||||
encodeStateFilter data =
|
||||
maybeObject
|
||||
[ ( "contains_url", Maybe.map E.bool data.containsUrl )
|
||||
, ( "include_redundant_members", Just <| E.bool data.includeRedundantMembers )
|
||||
, ( "lazy_load_members", Just <| E.bool data.lazyLoadMembers )
|
||||
, ( "limit", Maybe.map E.int data.limit )
|
||||
, ( "not_rooms", Maybe.map (E.list E.string) data.notRooms )
|
||||
, ( "not_senders", Maybe.map (E.list E.string) data.notSenders )
|
||||
, ( "not_types", Maybe.map (E.list E.string) data.notTypes )
|
||||
, ( "rooms", Maybe.map (E.list E.string) data.rooms )
|
||||
, ( "senders", Maybe.map (E.list E.string) data.senders )
|
||||
, ( "types", Maybe.map (E.list E.string) data.types )
|
||||
, ( "unread_thread_notifications", Just <| E.bool data.unreadThreadNotifications )
|
||||
]
|
||||
|
||||
|
||||
stateFilterDecoder : D.Decoder StateFilter
|
||||
stateFilterDecoder =
|
||||
D.map11
|
||||
(\a b c d e f g h i j k ->
|
||||
{ containsUrl = a, includeRedundantMembers = b, lazyLoadMembers = c, limit = d, notRooms = e, notSenders = f, notTypes = g, rooms = h, senders = i, types = j, unreadThreadNotifications = k }
|
||||
)
|
||||
(opField "contains_url" D.bool)
|
||||
(opFieldWithDefault "include_redundant_members" False D.bool)
|
||||
(opFieldWithDefault "lazy_load_members" False D.bool)
|
||||
(opField "limit" D.int)
|
||||
(opField "not_rooms" (D.list D.string))
|
||||
(opField "not_senders" (D.list D.string))
|
||||
(opField "not_types" (D.list D.string))
|
||||
(opField "rooms" (D.list D.string))
|
||||
(opField "senders" (D.list D.string))
|
||||
(opField "types" (D.list D.string))
|
||||
(opFieldWithDefault "unread_thread_notifications" False D.bool)
|
|
@ -0,0 +1,135 @@
|
|||
version: v1
|
||||
name: SpecObjects
|
||||
objects:
|
||||
Filter:
|
||||
description: Main filter for filtering results
|
||||
fields:
|
||||
account_data:
|
||||
type: EventFilter
|
||||
required: false
|
||||
event_fields:
|
||||
type: '[string]'
|
||||
required: false
|
||||
event_format:
|
||||
type: Enums.EventFormat
|
||||
default: Enums.Client
|
||||
presence:
|
||||
type: EventFilter
|
||||
required: false
|
||||
room:
|
||||
type: RoomFilter
|
||||
required: false
|
||||
EventFilter:
|
||||
description: Filter that describes which events to include/exclude.
|
||||
fields:
|
||||
limit:
|
||||
type: int
|
||||
required: false
|
||||
not_senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
not_types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
RoomFilter:
|
||||
description: Filter that describes what should and shouldn't be included for rooms.
|
||||
fields:
|
||||
account_data:
|
||||
type: RoomEventFilter
|
||||
required: false
|
||||
ephemeral:
|
||||
type: RoomEventFilter
|
||||
required: false
|
||||
include_leave:
|
||||
type: bool
|
||||
default: 'False'
|
||||
not_rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
state:
|
||||
type: StateFilter
|
||||
required: false
|
||||
timeline:
|
||||
type: RoomEventFilter
|
||||
required: false
|
||||
RoomEventFilter:
|
||||
description: Filter that describes which events to include/exclude in a Matrix room.
|
||||
fields:
|
||||
contains_url:
|
||||
type: bool
|
||||
required: false
|
||||
include_redundant_members:
|
||||
type: bool
|
||||
default: 'False'
|
||||
lazy_load_members:
|
||||
type: bool
|
||||
default: 'False'
|
||||
limit:
|
||||
type: int
|
||||
required: false
|
||||
not_rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
not_senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
not_types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
unread_thread_notifications:
|
||||
type: bool
|
||||
default: 'False'
|
||||
StateFilter:
|
||||
description: Filter that describes which events to include/exclude in a Matrix room.
|
||||
fields:
|
||||
contains_url:
|
||||
type: bool
|
||||
required: false
|
||||
include_redundant_members:
|
||||
type: bool
|
||||
default: 'False'
|
||||
lazy_load_members:
|
||||
type: bool
|
||||
default: 'False'
|
||||
limit:
|
||||
type: int
|
||||
required: false
|
||||
not_rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
not_senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
not_types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
rooms:
|
||||
type: '[string]'
|
||||
required: false
|
||||
senders:
|
||||
type: '[string]'
|
||||
required: false
|
||||
types:
|
||||
type: '[string]'
|
||||
required: false
|
||||
unread_thread_notifications:
|
||||
type: bool
|
||||
default: 'False'
|
|
@ -6,8 +6,10 @@ For example, this is used to store events by their event id, or store rooms by t
|
|||
|
||||
-}
|
||||
|
||||
import Dict exposing (Dict)
|
||||
|
||||
import FastDict as Dict exposing (Dict)
|
||||
import Json.Decode as D
|
||||
import Json.Encode as E
|
||||
import Hash exposing (Hash)
|
||||
|
||||
type Hashdict a
|
||||
= Hashdict
|
||||
|
@ -20,6 +22,12 @@ empty : (a -> String) -> Hashdict a
|
|||
empty hash =
|
||||
Hashdict { hash = hash, values = Dict.empty }
|
||||
|
||||
encode : Hashdict E.Value -> E.Value
|
||||
encode (Hashdict h) =
|
||||
h.values
|
||||
|> Dict.toList
|
||||
|> E.object
|
||||
|
||||
|
||||
fromList : (a -> String) -> List a -> Hashdict a
|
||||
fromList hash xs =
|
||||
|
@ -46,6 +54,9 @@ keys : Hashdict a -> List String
|
|||
keys (Hashdict h) =
|
||||
Dict.keys h.values
|
||||
|
||||
toList : Hashdict a -> List (String, a)
|
||||
toList (Hashdict h) =
|
||||
Dict.toList h.values
|
||||
|
||||
union : Hashdict a -> Hashdict a -> Hashdict a
|
||||
union (Hashdict h1) (Hashdict h2) =
|
||||
|
|
|
@ -0,0 +1,40 @@
|
|||
module Internal.Tools.Iddict exposing (..)
|
||||
{-| The id-dict stores values and gives them a unique id.
|
||||
-}
|
||||
|
||||
import FastDict as Dict exposing (Dict)
|
||||
|
||||
type Iddict a
|
||||
= Iddict
|
||||
{ cursor : Int
|
||||
, dict : Dict Int a
|
||||
}
|
||||
|
||||
empty : Iddict a
|
||||
empty =
|
||||
Iddict
|
||||
{ cursor = 0
|
||||
, dict = Dict.empty
|
||||
}
|
||||
|
||||
get : Int -> Iddict a -> Maybe a
|
||||
get k (Iddict { dict }) =
|
||||
Dict.get k dict
|
||||
|
||||
insert : a -> Iddict a -> (Int, Iddict a)
|
||||
insert v (Iddict d) =
|
||||
( d.cursor
|
||||
, Iddict { cursor = d.cursor + 1, dict = Dict.insert d.cursor v d.dict }
|
||||
)
|
||||
|
||||
keys : Iddict a -> List Int
|
||||
keys (Iddict { dict }) =
|
||||
Dict.keys dict
|
||||
|
||||
remove : Int -> Iddict a -> Iddict a
|
||||
remove k (Iddict d) =
|
||||
Iddict { d | dict = Dict.remove k d.dict }
|
||||
|
||||
values : Iddict a -> List a
|
||||
values (Iddict { dict }) =
|
||||
Dict.values dict
|
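-- Illustrative sketch (not part of this diff): the first insert hands out
-- id 0, the next one id 1, and so on.
example : ( Int, Iddict String )
example =
    case insert "first" empty of
        ( _, d ) ->
            insert "second" d

-- Tuple.first example == 1
-- get 0 (Tuple.second example) == Just "first"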
|
@ -0,0 +1,14 @@
|
|||
module Internal.Tools.VaultResult exposing (..)
|
||||
|
||||
import Internal.Tools.Exceptions as X
|
||||
import Task exposing (Task)
|
||||
|
||||
|
||||
type Info b a
|
||||
= Info a
|
||||
| NoInfo
|
||||
| InfoFailed { status : LoadingError, retry : Task X.Error b }
|
||||
|
||||
|
||||
type LoadingError
|
||||
= NeverRequested
|
|
@ -1,282 +1,333 @@
|
|||
module Internal.Values.Timeline exposing (..)
|
||||
|
||||
{-| This module shapes the Timeline type used to keep track of timelines in Matrix rooms.
|
||||
{-| The Timeline can be very complex, and it can be represented in surprisingly
|
||||
complex manners. This module aims to provide one single Timeline type that
|
||||
accepts the complex pieces of information from the API and contains it all in
|
||||
a simple way to view events.
|
||||
-}
|
||||
|
||||
import Internal.Config.Leaking as Leaking
|
||||
import Internal.Tools.Fold as Fold
|
||||
import Internal.Values.Event as Event exposing (IEvent)
|
||||
import Internal.Values.StateManager as StateManager exposing (StateManager)
|
||||
import FastDict as Dict exposing (Dict)
|
||||
import Internal.Tools.Iddict as Iddict exposing (Iddict)
|
||||
import Internal.Tools.Filters.Main as Filter exposing (Filter)
|
||||
import Internal.Tools.Iddict as Iddict
|
||||
|
||||
{-| A batch is a piece of the timeline that can be used to update the timeline.
|
||||
-}
|
||||
type Batch
|
||||
= BatchToken TokenValue (List TokenValue)
|
||||
| BatchSlice Batch Filter EventId (List EventId) TokenValue (List TokenValue)
|
||||
|
||||
type Timeline
|
||||
= Timeline
|
||||
{ prevBatch : String
|
||||
, nextBatch : String
|
||||
, events : List IEvent
|
||||
, stateAtStart : StateManager
|
||||
, previous : BeforeTimeline
|
||||
{-| An event id is a raw value provided by the Matrix API. It points to an event
|
||||
that is being stored elsewhere in the Matrix vault.
|
||||
-}
|
||||
type alias EventId = String
|
||||
|
||||
{-| A TokenValue is a raw value provided by the Matrix API. It is an opaque
|
||||
value which indicates a point in the timeline and provides no other information.
|
||||
-}
|
||||
type alias TokenValue = String
|
||||
|
||||
{-| Central data type in the room.
|
||||
-}
|
||||
type alias Timeline =
|
||||
{ mostRecentToken : TokenId
|
||||
, slices : Iddict Slice
|
||||
, tokenToId : Dict TokenValue TokenId
|
||||
, tokens : Iddict Token
|
||||
}
|
||||
|
||||
{-| Pointer to a specific token.
|
||||
-}
|
||||
type TokenId = TokenId Int
|
||||
|
||||
{-| Pointer to a specific slice on the timeline.
|
||||
-}
|
||||
type SliceId = SliceId Int
|
||||
|
||||
{-| Information of a specific slice on the timeline.
|
||||
-}
|
||||
type Slice
|
||||
= Slice
|
||||
{ filter : Filter
|
||||
, head : EventId
|
||||
, next : List TokenId
|
||||
, previous : List TokenId
|
||||
, tail : List EventId
|
||||
}
|
||||
|
||||
|
||||
type BeforeTimeline
|
||||
= Endless String
|
||||
| Gap Timeline
|
||||
| StartOfTimeline
|
||||
|
||||
|
||||
{-| Add a new batch of events to the front of the timeline.
|
||||
{-| Information on a token, which is a point on the timeline. It might have
|
||||
multiple TokenValue values pointing to it.
|
||||
-}
|
||||
addNewEvents :
|
||||
{ events : List IEvent
|
||||
, limited : Bool
|
||||
, nextBatch : String
|
||||
, prevBatch : String
|
||||
, stateDelta : Maybe StateManager
|
||||
}
|
||||
-> Timeline
|
||||
-> Timeline
|
||||
addNewEvents { events, limited, nextBatch, prevBatch, stateDelta } (Timeline t) =
|
||||
Timeline
|
||||
(if prevBatch == t.nextBatch || not limited then
|
||||
{ t
|
||||
| events = t.events ++ events
|
||||
, nextBatch = nextBatch
|
||||
}
|
||||
|
||||
else
|
||||
{ prevBatch = prevBatch
|
||||
, nextBatch = nextBatch
|
||||
, events = events
|
||||
, stateAtStart =
|
||||
t
|
||||
|> Timeline
|
||||
|> mostRecentState
|
||||
|> StateManager.updateRoomStateWith
|
||||
(stateDelta
|
||||
|> Maybe.withDefault StateManager.empty
|
||||
)
|
||||
, previous = Gap (Timeline t)
|
||||
}
|
||||
)
|
||||
|
||||
|
||||
{-| Create a new timeline.
|
||||
-}
|
||||
newFromEvents :
|
||||
{ events : List IEvent
|
||||
, nextBatch : String
|
||||
, prevBatch : Maybe String
|
||||
, stateDelta : Maybe StateManager
|
||||
}
|
||||
-> Timeline
|
||||
newFromEvents { events, nextBatch, prevBatch, stateDelta } =
|
||||
Timeline
|
||||
{ events = events
|
||||
, nextBatch = nextBatch
|
||||
, prevBatch =
|
||||
prevBatch
|
||||
|> Maybe.withDefault Leaking.prevBatch
|
||||
, previous =
|
||||
prevBatch
|
||||
|> Maybe.map Endless
|
||||
|> Maybe.withDefault StartOfTimeline
|
||||
, stateAtStart =
|
||||
stateDelta
|
||||
|> Maybe.withDefault StateManager.empty
|
||||
type Token
|
||||
= Token
|
||||
{ next : List SliceId
|
||||
, previous : List SliceId
|
||||
, head : TokenValue
|
||||
, tail : List TokenValue
|
||||
}
|
||||
|
||||
{-| Add a new batch to the timeline. Tokens that already existed are ignored
|
||||
but connected to the slices.
|
||||
|
||||
{-| Insert events starting from a known batch token.
|
||||
The function returns token ids to where the batch starts and ends, as well as
|
||||
the renewed timeline.
|
||||
-}
|
||||
insertEvents :
|
||||
{ events : List IEvent
|
||||
, nextBatch : String
|
||||
, prevBatch : Maybe String
|
||||
, stateDelta : Maybe StateManager
|
||||
}
|
||||
-> Timeline
|
||||
-> Timeline
|
||||
insertEvents ({ events, nextBatch, prevBatch, stateDelta } as data) (Timeline t) =
|
||||
Timeline
|
||||
(case prevBatch of
|
||||
-- No prevbatch suggests the start of the timeline.
|
||||
-- This means that we must recurse until we've hit the bottom,
|
||||
-- and then mark the bottom of the timeline.
|
||||
Nothing ->
|
||||
case t.previous of
|
||||
Gap prevT ->
|
||||
{ t
|
||||
| previous =
|
||||
prevT
|
||||
|> insertEvents data
|
||||
|> Gap
|
||||
}
|
||||
|
||||
_ ->
|
||||
if nextBatch == t.prevBatch then
|
||||
{ t | previous = StartOfTimeline, events = events ++ t.events, stateAtStart = StateManager.empty }
|
||||
|
||||
else
|
||||
{ t | previous = Gap <| newFromEvents data }
|
||||
|
||||
-- If there is a prevbatch, it is not the start of the timeline
|
||||
-- and could be located anywhere.
|
||||
-- Starting at the front, look for a way to match it with the existing timeline.
|
||||
Just p ->
|
||||
-- Piece connects to the front of the timeline.
|
||||
if t.nextBatch == p then
|
||||
{ t
|
||||
| events = t.events ++ events
|
||||
, nextBatch = nextBatch
|
||||
}
|
||||
-- Piece connects to the back of the timeline.
|
||||
|
||||
else if nextBatch == t.prevBatch then
|
||||
case t.previous of
|
||||
Gap (Timeline prevT) ->
|
||||
-- Piece also connects to the timeline in the back,
|
||||
-- allowing the two timelines to merge.
|
||||
if prevT.nextBatch == p then
|
||||
{ events = prevT.events ++ events ++ t.events
|
||||
, nextBatch = t.nextBatch
|
||||
, prevBatch = prevT.prevBatch
|
||||
, stateAtStart = prevT.stateAtStart
|
||||
, previous = prevT.previous
|
||||
addBatch : Batch -> Timeline -> { start : TokenId, end : TokenId, timeline : Timeline }
addBatch batch timeline =
    case batch of
        BatchToken head tail ->
            case addToken (Token { next = [], previous = [], head = head, tail = tail }) timeline of
                ( tokenId, newTimeline ) ->
                    { start = tokenId, end = tokenId, timeline = newTimeline }

        BatchSlice b filter sHead sTail tHead tTail ->
            case addBatch b timeline of
                newBatch ->
                    let
                        slice : Slice
                        slice =
                            Slice
                                { filter = filter
                                , head = sHead
                                , next = []
                                , previous = []
                                , tail = sTail
                                }

                        token : Token
                        token =
                            Token
                                { next = []
                                , previous = []
                                , head = tHead
                                , tail = tTail
                                }
                    in
                    case newBatch.timeline |> insertSlice slice |> Tuple.mapSecond (addToken token) of
                        ( sliceId, ( tokenId, newTimeline ) ) ->
                            { start = newBatch.start
                            , end = tokenId
                            , timeline =
                                newTimeline
                                    |> connectToSlice newBatch.end sliceId
                                    |> connectToToken sliceId tokenId
                            }

                            else
                                { t
                                    | events = events ++ t.events
                                    , prevBatch = p
                                    , stateAtStart =
                                        stateDelta
                                            |> Maybe.withDefault StateManager.empty
                                }

                        Endless _ ->
                            { t
                                | events = events ++ t.events
                                , prevBatch = p
                                , stateAtStart =
                                    stateDelta
                                        |> Maybe.withDefault StateManager.empty
                                , previous = Endless p
                            }

                        _ ->
                            { t
                                | events = events ++ t.events
                                , prevBatch = p
                                , stateAtStart =
                                    stateDelta
                                        |> Maybe.withDefault StateManager.empty
                            }
                    -- Piece doesn't connect to this piece of the timeline.
                    -- Consequently, look for previous parts of the timeline to see if it connects.

                else
                    case t.previous of
                        Gap prevT ->
                            { t
                                | previous =
                                    prevT
                                        |> insertEvents data
                                        |> Gap
                            }

                        _ ->
                            t
        )
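

-- The example below is an illustrative sketch and not part of the original module:
-- it shows how a fetched batch of events could be folded into an existing timeline.
-- The batch token strings are hypothetical values.
exampleInsert : List IEvent -> Timeline -> Timeline
exampleInsert someEvents someTimeline =
    someTimeline
        |> insertEvents
            { events = someEvents
            , nextBatch = "next_batch_token"
            , prevBatch = Just "prev_batch_token"
            , stateDelta = Nothing
            }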


{-| Get the width of the latest gap. This data is usually accessed when trying to get more messages.
-}
{-| Add a new token to the timeline. If it already exists, this function does
nothing and instead returns the existing token id.
-}
latestGap : Timeline -> Maybe { from : Maybe String, to : String }
latestGap (Timeline t) =
    case t.previous of
        StartOfTimeline ->
            Nothing

        Endless prevBatch ->
            Just { from = Nothing, to = prevBatch }

        Gap (Timeline pt) ->
            Just { from = Just pt.nextBatch, to = t.prevBatch }
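

-- Illustrative sketch (hypothetical helper, not part of the original module):
-- the gap bounds can tell a client which part of the timeline still needs to be
-- backfilled, for example before requesting more messages.
describeGap : Timeline -> String
describeGap timeline =
    case latestGap timeline of
        Nothing ->
            "Timeline is complete back to its start."

        Just { from, to } ->
            "Events missing between " ++ Maybe.withDefault "the start" from ++ " and " ++ to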


{-| Get the longest uninterrupted length of most recent events.
-}
localSize : Timeline -> Int
localSize =
    mostRecentEvents >> List.length


{-| Get a list of the most recent events recorded.
-}
mostRecentEvents : Timeline -> List IEvent
mostRecentEvents (Timeline t) =
    t.events


{-| Get the needed `since` parameter to get the latest events.
-}
nextSyncToken : Timeline -> String
nextSyncToken (Timeline t) =
    t.nextBatch


{-| Get the state of the room after the most recent event.
-}
mostRecentState : Timeline -> StateManager
mostRecentState (Timeline t) =
    t.stateAtStart
        |> StateManager.updateRoomStateWith
            (StateManager.fromEventList t.events)


{-| Get the timeline's room state at any given event. The function returns `Nothing` if the event is not found in the timeline.
-}
stateAtEvent : IEvent -> Timeline -> Maybe StateManager
stateAtEvent event (Timeline t) =
    if
        t.events
            |> List.map Event.eventId
            |> List.member (Event.eventId event)
    then
        Fold.untilCompleted
            List.foldl
            (\e ->
                StateManager.addEvent e
                    >> (if Event.eventId e == Event.eventId event then
                            Fold.AnswerWith

                        else
                            Fold.ContinueWith
                       )
            )
addToken : Token -> Timeline -> ( TokenId, Timeline )
addToken ((Token { head, tail }) as token) timeline =
    case getTokenIdFromToken token timeline of
        Just tokenId ->
            ( tokenId
            , mapToken tokenId
                (\(Token tk) ->
                    case mergeUnique ( head, tail ) ( tk.head, tk.tail ) of
                        ( h, t ) ->
                            Token { tk | head = h, tail = t }
                )
                timeline
            )
            t.stateAtStart
            t.events
            |> Just

        Nothing ->
            insertToken token timeline

    else
        case t.previous of
            Gap prevT ->
                stateAtEvent event prevT
{-| Sometimes two separate tokens point to the same location in the timeline.
You can add a new token value as an alias to the timeline using this function.
-}
addTokenAlias : String -> String -> Timeline -> Timeline
addTokenAlias old new timeline =
    case Dict.get old timeline.tokenToId of
        Just tokenId ->
            timeline
                -- Update the token
                |> mapToken
                    tokenId
                    (\(Token t) ->
                        Token { t | head = new, tail = t.head :: t.tail }
                    )
                -- Add a token pointer for the new value
                |> (\tl -> { tl | tokenToId = Dict.insert new tokenId tl.tokenToId })

        Nothing ->
            timeline
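

-- Illustrative sketch (hypothetical token values, not part of the original module):
-- after discovering that two tokens point to the same spot, register the second
-- value as an alias of the first.
exampleAlias : Timeline -> Timeline
exampleAlias =
    addTokenAlias "sync_token_123" "messages_token_456"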

            _ ->
                Nothing


{-| Count how many events the current timeline is storing.
-}
{-| Connect a slice to a token to its right. The connection is established in
two directions.
-}
size : Timeline -> Int
size (Timeline t) =
    (case t.previous of
        Gap prev ->
            size prev


connectToToken : SliceId -> TokenId -> Timeline -> Timeline
connectToToken ((SliceId sliceId) as s) ((TokenId tokenId) as t) timeline =
    { timeline
        | slices =
            Iddict.map sliceId
                (\(Slice slice) ->
                    if isConnectedToToken t slice.next then
                        Slice slice

                    else
                        Slice { slice | next = t :: slice.next }
                )
                timeline.slices
        , tokens =
            Iddict.map tokenId
                (\(Token token) ->
                    if isConnectedToSlice s token.previous then
                        Token token

                    else
                        Token { token | previous = s :: token.previous }
                )
                timeline.tokens
    }

        _ ->
            0
    )
        + List.length t.events

{-| Connect a token to a slice to its right. The connection is established in
two directions.
-}
connectToSlice : TokenId -> SliceId -> Timeline -> Timeline
connectToSlice ((TokenId tokenId) as t) ((SliceId sliceId) as s) timeline =
    { timeline
        | slices =
            Iddict.map sliceId
                (\(Slice slice) ->
                    if isConnectedToToken t slice.previous then
                        Slice slice

                    else
                        Slice { slice | previous = t :: slice.previous }
                )
                timeline.slices
        , tokens =
            Iddict.map tokenId
                (\(Token token) ->
                    if isConnectedToSlice s token.next then
                        Token token

                    else
                        Token { token | next = s :: token.next }
                )
                timeline.tokens
    }


{-| Get an empty timeline.
-}
empty : Timeline
empty =
    { mostRecentToken = TokenId 0
    , slices = Iddict.empty
    , tokenToId = Dict.empty
    , tokens = Iddict.empty
    }


{-| Get a slice of events from the timeline.
-}
getSlice : SliceId -> Timeline -> Maybe Slice
getSlice (SliceId key) { slices } =
    Iddict.get key slices


{-| Get a token based on its id.
-}
getTokenFromTokenId : TokenId -> Timeline -> Maybe Token
getTokenFromTokenId (TokenId tokenId) timeline =
    Iddict.get tokenId timeline.tokens


{-| Get a token based on its token value.
-}
getTokenFromTokenValue : TokenValue -> Timeline -> Maybe Token
getTokenFromTokenValue value timeline =
    timeline
        |> getTokenIdFromTokenValue value
        |> Maybe.andThen (\tid -> getTokenFromTokenId tid timeline)


{-| Get the token id based on a token value. The function returns Nothing if it
isn't on the timeline.
-}
getTokenIdFromTokenValue : TokenValue -> Timeline -> Maybe TokenId
getTokenIdFromTokenValue value timeline =
    Dict.get value timeline.tokenToId


{-| Get the token's id. The function returns Nothing if the token isn't on the
timeline.
-}
getTokenIdFromToken : Token -> Timeline -> Maybe TokenId
getTokenIdFromToken (Token { head, tail }) timeline =
    List.foldl
        (\value ptr ->
            case ptr of
                Nothing ->
                    getTokenIdFromTokenValue value timeline

                Just _ ->
                    ptr
        )
        Nothing (head :: tail)


{-| Insert a new slice into the timeline. This is a raw operation that should
never be exposed!
-}
insertSlice : Slice -> Timeline -> ( SliceId, Timeline )
insertSlice slice timeline =
    timeline.slices
        |> Iddict.insert slice
        |> Tuple.mapBoth SliceId (\x -> { timeline | slices = x })


{-| Insert a new token into the timeline. This is a raw operation that should
never be exposed!
-}
insertToken : Token -> Timeline -> ( TokenId, Timeline )
insertToken ((Token { head }) as token) timeline =
    case Iddict.insert token timeline.tokens of
        ( tokenId, tokens ) ->
            ( TokenId tokenId
            , { timeline
                | tokenToId = Dict.insert head (TokenId tokenId) timeline.tokenToId
                , tokens = tokens
              }
            )


{-| Whether a list contains a given slice id.
-}
isConnectedToSlice : SliceId -> List SliceId -> Bool
isConnectedToSlice (SliceId a) =
    List.any (\(SliceId b) -> a == b)


{-| Whether a list contains a given token id.
-}
isConnectedToToken : TokenId -> List TokenId -> Bool
isConnectedToToken (TokenId a) =
    List.any (\(TokenId b) -> a == b)


{-| Update an existing slice based on its id.
-}
mapSlice : SliceId -> (Slice -> Slice) -> Timeline -> Timeline
mapSlice (SliceId sliceId) f timeline =
    { timeline | slices = Iddict.map sliceId f timeline.slices }


{-| Update an existing token based on its id.
-}
mapToken : TokenId -> (Token -> Token) -> Timeline -> Timeline
mapToken (TokenId tokenId) f timeline =
    { timeline | tokens = Iddict.map tokenId f timeline.tokens }


{-| Merge two lists such that each element only appears once.
-}
mergeUnique : ( a, List a ) -> ( a, List a ) -> ( a, List a )
mergeUnique ( head, tail ) ( otherHead, otherTail ) =
    otherTail
        |> List.filter (\e -> e /= otherHead)
        |> (::) otherHead
        |> List.filter (\e -> e /= head)
        |> List.filter (\e -> not <| List.member e tail)
        -- Keep the original tail as well, otherwise its values would be lost in the merge.
        |> (++) tail
        |> Tuple.pair head
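

-- Illustrative example (hypothetical token values, not part of the original module):
-- merging the known values of two tokens keeps each value exactly once, with the
-- first head staying in front. The expected result is ( "a", [ "b", "c" ] ).
exampleMerge : ( String, List String )
exampleMerge =
    mergeUnique ( "a", [] ) ( "b", [ "a", "c" ] )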


{-| Turn a single slice into a batch.
-}
sliceToBatch : { start : String, filter : Filter, events : List EventId, end : String } -> Batch
sliceToBatch { start, filter, events, end } =
    case events of
        [] ->
            BatchToken end [ start ]

        head :: tail ->
            BatchSlice (BatchToken start []) filter head tail end []
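

-- Illustrative example (hypothetical token values): a slice without any events
-- collapses into a plain token batch, where both token values end up on one token.
--
--     sliceToBatch { start = "t1", filter = someFilter, events = [], end = "t2" }
--         == BatchToken "t2" [ "t1" ]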

{-| Turn a single token into a batch.
-}
tokenToBatch : String -> Batch
tokenToBatch value =
    BatchToken value []


@ -432,3 +432,8 @@ sync vault onResponse =
rooms : Vault -> List Room.Room
rooms =
    Snackbar.mapList Internal.getRooms


settings : (Snackbar.Settings -> Snackbar.Settings) -> Vault -> Vault
settings =
    Snackbar.updateSettings

@ -33,10 +33,8 @@ interact with the API.
import Internal.Api.VaultUpdate as Api
import Internal.Invite exposing (RoomInvite)
import Internal.Room exposing (Room)
import Internal.Tools.Exceptions as X
import Internal.Vault
import Json.Encode as E
import Task exposing (Task)


{-| The Matrix API requires you to keep track of a lot of tokens, keys, values and more.

@ -58,6 +56,13 @@ translate those instructions to a `VaultUpdate` that you can feed to your `Vault
type alias VaultUpdate =
    Api.VaultUpdate


{-| After evaluating an update, the `Status` type tells you how the Vault handled it.
-}
type Status msg
    = Good
    | Warning String ((VaultUpdate -> msg) -> Cmd msg)
    | Break String ((VaultUpdate -> msg) -> Cmd msg)
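

-- A sketch (with an assumed `update` function and `GotVaultUpdate` message) of
-- reacting to a `Status` value; the attached function builds a follow-up command
-- once it is given your `VaultUpdate -> msg` constructor:
--
--     case status of
--         Good ->
--             ( model, Cmd.none )
--
--         Warning reason followUp ->
--             ( { model | log = reason :: model.log }, followUp GotVaultUpdate )
--
--         Break reason followUp ->
--             ( { model | log = reason :: model.log }, followUp GotVaultUpdate )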


{-| Create a new vault based on an access token.
Keep in mind that access tokens might eventually be revoked or expire,

@ -68,9 +68,7 @@ allowed for every room admin.
import Internal.Api.VaultUpdate exposing (VaultUpdate)
import Internal.Event as Event
import Internal.Room as Internal
import Internal.Tools.Exceptions as X
import Json.Decode as D
import Task exposing (Task)


{-| A room represents a channel of communication within a Matrix homeserver.

@ -32,10 +32,8 @@ Once you have the event you want, you can explore it with the following function

import Internal.Api.VaultUpdate exposing (VaultUpdate)
import Internal.Invite as Internal
import Internal.Tools.Exceptions as X
import Internal.Values.RoomInvite as IR
import Json.Encode as E
import Task exposing (Task)


{-| The `RoomInvite` type serves as an invite to a given room.

@ -0,0 +1,27 @@
module Matrix.Settings exposing (..)

{-| There are a lot of settings that you can change!

These settings change how the Vault interacts with the Matrix API.
You can adjust these values for performance reasons, for customizability, benchmarking,
or maybe just because you like it. :)

It is common to set all settings in the `init` function, but you can adjust all settings on the fly.
-}

import Internal.Vault exposing (Vault)

{-| When your Matrix client synchronizes with the homeserver, the homeserver often
responds quite quickly, giving all the information that you need.

Sometimes, the homeserver has nothing new to report, and instead makes you wait for a response.
This is called long-polling: the homeserver waits for an update before answering you.
Long-polling is very useful!

This setting limits how long the long-polling may last. It is smart
to make this equal to the interval at which you run the `sync` function.

**Default:** 10 (seconds)
-}
syncTimeout : Int -> Vault -> Vault
syncTimeout timeout =
    Internal.Vault.settings (\data -> { data | syncTimeout = timeout })
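

-- Illustrative usage (hypothetical value): a client that calls `sync` every
-- 30 seconds can make the long-polling window match that interval.
--
--     vault |> syncTimeout 30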

@ -0,0 +1,7 @@
# Design refactors to be made

The following refactors are to be made:

- We need an `Info` type that informs the user whether data exists and, if not, why it doesn't exist. (A rough sketch follows below.)

- We need a `Timeline` type that users can use to go through threads in the global room timeline.
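
A possible shape for such an `Info` type, purely as a sketch (the constructor and
reason names below are made up and not part of the library):

```elm
type Info a
    = Found a
    | NotFound Reason


type Reason
    = NeverRequested -- the Vault has never asked the homeserver for this value
    | NotSyncedYet -- a request is underway, but no response has arrived yet
    | Unsupported -- the homeserver does not offer this information
```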
|