Publish beta 3.0.0

Merge pull request #18 from noordstar/develop
main 3.0.0
noordstar 2024-04-12 15:11:16 +02:00 committed by GitHub
commit c7659b592b
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
26 changed files with 3612 additions and 87 deletions

docs/timeline.md Normal file
View File

@ -0,0 +1,138 @@
# Timeline
Given the complex nature of the Timeline, its design deserves some explanation.
This document describes how the Elm SDK designs the Timeline, so that other
projects may learn from it.
## API endpoint disambiguations
Generally speaking, there are a few API endpoints with similar design:
- The [`/sync` endpoint](https://spec.matrix.org/v1.9/client-server-api/#get_matrixclientv3sync),
which gets the events that the homeserver received most recently.
- The [`/messages` endpoint](https://spec.matrix.org/v1.9/client-server-api/#get_matrixclientv3roomsroomidmessages),
which gets events in topological order.
As noted in the Matrix spec:
> Events are ordered in this API according to the arrival time of the event on
> the homeserver. This can conflict with other APIs which order events based on
> their partial ordering in the event graph. This can result in duplicate events
> being received (once per distinct API called). Clients SHOULD de-duplicate
> events based on the event ID when this happens.
For this reason, the Elm SDK maintains **two independent timelines** that are tied
together when necessary to form a coherent timeline.
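The de-duplication recommended in the spec excerpt above could be sketched in Elm as follows. This is an illustrative snippet, not the SDK's actual implementation; the `eventId` field name is an assumption.

```elm
import Set exposing (Set)


{-| Remove events whose event ID has been seen before, keeping the first
occurrence. The `eventId` field name is hypothetical.
-}
deduplicate : List { a | eventId : String } -> List { a | eventId : String }
deduplicate events =
    events
        |> List.foldl
            (\event ( seen, acc ) ->
                if Set.member event.eventId seen then
                    ( seen, acc )

                else
                    ( Set.insert event.eventId seen, event :: acc )
            )
            ( Set.empty, [] )
        |> Tuple.second
        |> List.reverse
```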
## Elm design
For those unfamiliar, the Elm Architecture breaks into three parts:
- **Model** - the state of the application
- **View** - a way to turn your state into meaningful information
- **Update** - a way to update your state based on the Matrix API
Since these concepts are compartmentalized, it is impossible to make an API call
while executing the **view** function; the Elm SDK must at all times find a way
to represent its state.
## Timeline
Concerning the Matrix timeline, the aim is to create a representation
(**Model**) of the timeline, find a way to display (**View**) it, and find a
simple way to adjust it with every incoming Matrix API result (**Update**).
First, we define what a timeline batch is.
### Timeline batch
A timeline batch is something that most Matrix API endpoints return. It is a
little piece of the timeline and contains the following four pieces of
information:
1. A list of events that are part of the timeline.
2. A filter whose criteria all provided events meet.
3. An end batch token that functions as an identifier.
4. _(Optional.)_ A start token. Its absence indicates the start of the
timeline.
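The four pieces above could be sketched as an Elm record. This is an illustrative sketch only: the field names, and the assumption of some `Filter` type, do not necessarily match the Elm SDK's internal definitions.

```elm
-- Hypothetical sketch of a timeline batch; names are illustrative.
type alias Batch =
    { events : List String -- (1) event IDs, in timeline order
    , filter : Filter -- (2) all listed events meet this filter's criteria
    , end : String -- (3) end batch token that identifies the batch
    , start : Maybe String -- (4) Nothing indicates the start of the timeline
    }
```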
Here's an example of such a timeline batch:
```
|-->[■]->[■]->[●]->[■]->[■]->[●]-->|
|                                  |
|<---   filter: only ■ and ●  --->|
|                                  |
start:                          end:
<token_1>                  <token_2>
```
When the Matrix API later returns a batch that starts at `<token_2>`, we know
that we can connect it to the batch above and make a longer list of events!
At first, this seems quite simple to connect, but there are some difficulties
that come up along the way.
### Challenge 1: different filters, different locations
When two timeline batches have different filters, we do not know their relative
location. For example, the following two timeline batches COULD overlap, but it
is also possible that they don't:
```
|-->[■]->[■]->[●]->[■]->[■]->[●]-->|
|                                  |
|<---   filter: only ■ and ●  --->|
|                                  |
start:                          end:
<token_1>                  <token_2>

|-->[★]->[★]->[★]->[★]-->|
|                        |
|<--  filter: only ★  -->|
|                        |
start:                end:
<token_3>        <token_4>
```
Realistically, there is currently no way of knowing without making more API
calls. However, just making more API calls isn't a solution in Elm because of
its architecture.
> **SOLUTION:** As described in the **View** function, we may assume that
overlapping timeline batches have overlapping events. If they overlap yet have
no overlapping events, then their filters must be disjoint. If the filters are
disjoint, we do not care whether they're overlapping.
### Challenge 2: different filters, same spot
Suppose there is a known timeline batch, and we're trying to **Update** the
timeline to represent the timeline between `<token_1>` and `<token_2>` for a
different filter:
```
|-->[■]->[■]->[●]->[■]->[■]->[●]-->|
|                                  |
|<---   filter: only ■ and ●  --->|
|                                  |
start:                          end:
<token_1>                  <token_2>
```
If we wish to know what's in there for a different filter `f`, then:
1. If `f` equals the filter from the timeline batch, we can copy the events.
2. If `f` is a subfilter of the batch filter (for example: `only ■`), then we
can copy the events from the given batch, and then locally filter out the
events that do not match filter `f`.
3. If the batch filter is a subfilter of `f`, then we can use an API call
between the same batch tokens `<token_1>` and `<token_2>`. In the worst
case, we receive the exact same list of events. In another scenario, we
might discover far more events and receive some new batch value `<token_3>`
in-between `<token_1>` and `<token_2>`.
4. If neither filter is a subfilter of the other and the two are (at least
partially) disjoint, then they do not need to correlate and any other batch
values can be chosen.
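The four cases above could be sketched as a decision function. This is an illustrative sketch: the `Strategy` type and `chooseStrategy` name are hypothetical, and it assumes a comparison `subsetOf : Filter -> Filter -> Bool` where `subsetOf big small` checks whether `small` is a subfilter of `big`.

```elm
-- Hypothetical decision sketch for Challenge 2; names are illustrative.
type Strategy
    = CopyEvents -- (1) the filters are equal
    | FilterLocally -- (2) f is a subfilter of the batch filter
    | CallApi -- (3) the batch filter is a subfilter of f
    | Independent -- (4) neither is a subfilter of the other


chooseStrategy : Filter -> Filter -> Strategy
chooseStrategy batchFilter f =
    if subsetOf batchFilter f && subsetOf f batchFilter then
        CopyEvents

    else if subsetOf batchFilter f then
        FilterLocally

    else if subsetOf f batchFilter then
        CallApi

    else
        Independent
```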

View File

@ -3,17 +3,20 @@
"name": "noordstar/elm-matrix-sdk-beta",
"summary": "Matrix SDK for instant communication. Unstable beta version for testing only.",
"license": "EUPL-1.1",
"version": "2.1.2",
"version": "3.0.0",
"exposed-modules": [
"Matrix",
"Matrix.Event",
"Matrix.Settings"
"Matrix.Settings",
"Matrix.User"
],
"elm-version": "0.19.0 <= v < 0.20.0",
"dependencies": {
"elm/core": "1.0.0 <= v < 2.0.0",
"elm/json": "1.0.0 <= v < 2.0.0",
"elm/parser": "1.0.0 <= v < 2.0.0",
"elm/time": "1.0.0 <= v < 2.0.0",
"micahhahn/elm-safe-recursion": "2.0.0 <= v < 3.0.0",
"miniBill/elm-fast-dict": "1.0.0 <= v < 2.0.0"
},
"test-dependencies": {

View File

@ -23,7 +23,7 @@ will assume until overridden by the user.
-}
currentVersion : String
currentVersion =
"beta 2.1.2"
"beta 3.0.0"
{-| The default device name that is being communicated with the Matrix API.

View File

@ -1,5 +1,5 @@
module Internal.Config.Text exposing
( docs, failures, fields
( docs, failures, fields, mappings, logs, parses
, accessTokenFoundLocally, accessTokenExpired, accessTokenInvalid
, versionsFoundLocally, versionsReceived, versionsFailedToDecode
, unsupportedVersionForEndpoint
@ -27,7 +27,7 @@ You should only do this if you know what you're doing.
## Type documentation
@docs docs, failures, fields
@docs docs, failures, fields, mappings, logs, parses
## API Authentication
@ -116,9 +116,14 @@ docs :
, envelope : TypeDocs
, event : TypeDocs
, hashdict : TypeDocs
, ibatch : TypeDocs
, iddict : TypeDocs
, itoken : TypeDocs
, mashdict : TypeDocs
, settings : TypeDocs
, stateManager : TypeDocs
, timeline : TypeDocs
, timelineFilter : TypeDocs
, unsigned : TypeDocs
}
docs =
@ -148,6 +153,24 @@ docs =
, "For example, the hashdict can store events and use their event id as their key."
]
}
, ibatch =
{ name = "IBatch"
, description =
[ "The internal batch tracks a batch of events on the Matrix timeline."
]
}
, iddict =
{ name = "Iddict"
, description =
[ "An iddict automatically handles creating appropriate keys by incrementally assigning a new key to new values."
]
}
, itoken =
{ name = "IToken"
, description =
[ "The IToken connects batches in the timeline and maintains relative order."
]
}
, mashdict =
{ name = "Mashdict"
, description =
@ -167,6 +190,18 @@ docs =
, "Instead of making the user loop through the room's timeline of events, the StateManager offers the user a dictionary-like experience to navigate through the Matrix room state."
]
}
, timeline =
{ name = "Timeline"
, description =
[ "The Timeline tracks events and orders them in a simple way for the user to view them."
]
}
, timelineFilter =
{ name = "Timeline Filter"
, description =
[ "The Timeline Filter allows the user to be very specific about which events they're interested in."
]
}
, unsigned =
{ name = "Unsigned Data"
, description =
@ -179,13 +214,14 @@ docs =
{-| Description of all edge cases where a JSON decoder can fail.
-}
failures : { hashdict : Desc, mashdict : Desc }
failures : { hashdict : Desc, listWithOne : String, mashdict : Desc }
failures =
{ hashdict =
[ "Not all values map to thir respected hash with the given hash function."
[ "Not all values map to their respected hash with the given hash function."
]
, listWithOne = "Expected at least one value in the list - zero found."
, mashdict =
[ "Not all values map to thir respected hash with the given hash function."
[ "Not all values map to their respected hash with the given hash function."
]
}
@ -218,11 +254,41 @@ fields :
, eventType : Desc
, unsigned : Desc
}
, ibatch :
{ end : Desc
, events : Desc
, filter : Desc
, start : Desc
}
, iddict :
{ cursor : Desc
, dict : Desc
}
, itoken :
{ behind : Desc
, ends : Desc
, inFrontOf : Desc
, name : Desc
, starts : Desc
}
, settings :
{ currentVersion : Desc
, deviceName : Desc
, syncTime : Desc
}
, timeline :
{ batches : Desc
, events : Desc
, filledBatches : Desc
, mostRecentBatch : Desc
, tokens : Desc
}
, timelineFilter :
{ senders : Desc
, sendersAllowOthers : Desc
, types : Desc
, typesAllowOthers : Desc
}
, unsigned :
{ age : Desc
, prevContent : Desc
@ -293,6 +359,45 @@ fields =
[ "Contains optional extra information about the event."
]
}
, ibatch =
{ end =
[ "Pointer to the token that ends the internal batch."
]
, events =
[ "List of event IDs contained within the internal batch."
]
, filter =
[ "Filter that indicates how strictly the homeserver has selected events when producing the given list."
]
, start =
[ "Pointer to the token that starts the internal batch."
]
}
, iddict =
{ cursor =
[ "To ensure uniqueness of all keys and to prevent the usage of keys that were previously assigned to older values, the iddict tracks the smallest non-negative integer that hasn't been used yet."
]
, dict =
[ "Dictionary that contains all values stored in the iddict."
]
}
, itoken =
{ behind =
[ "This token is behind all tokens in this field."
]
, ends =
[ "This token is in front of the batches in this field."
]
, inFrontOf =
[ "This token is ahead of all tokens in this field."
]
, name =
[ "Opaque value provided by the homeserver."
]
, starts =
[ "This token is at the start of the batches in this field."
]
}
, settings =
{ currentVersion =
[ "Indicates the current version of the Elm SDK."
@ -304,6 +409,40 @@ fields =
[ "Indicates the frequency in milliseconds with which the Elm SDK should long-poll the /sync endpoint."
]
}
, timeline =
{ batches =
[ "Dictionary storing all event batches in the timeline."
]
, events =
[ "Mapping that allows us to quickly zoom in on an event."
]
, filledBatches =
[ "Counter that tracks how many batches are kept by the timeline."
, "Batches are only counted if they are filled by at least one event."
]
, mostRecentBatch =
[ "Tracks the most recent batch that was sent by the homeserver - usually through `/sync`."
]
, tokens =
[ "Index of all the tokens used to connect event batches on the timeline."
]
}
, timelineFilter =
{ senders =
[ "A list of senders that is considered an exception to the infinite pool of \"other\" users."
]
, sendersAllowOthers =
[ "Value that determines whether the infinite pool of others is included."
, "If False, only the users mentioned in `senders` are included. If True, then all users who aren't mentioned in `senders` are included."
]
, types =
[ "A list of event types that is considered an exception to the infinite pool of \"other\" event types."
]
, typesAllowOthers =
[ "Value that determines whether the infinite pool of others is included."
, "If False, only the event types mentioned in `types` are included. If True, then all event types that aren't mentioned in `types` are included."
]
}
, unsigned =
{ age =
[ "The time in milliseconds that has elapsed since the event was sent. This field is generated by the local homeserver, and may be incorrect if the local time on at least one of the two servers is out of sync, which can cause the age to either be negative or greater than it actually is."
@ -347,6 +486,57 @@ leakingValueFound leaking_value =
"Found leaking value : " ++ leaking_value
{-| These logs might appear during a process where something unexpected has
happened. Most of these unexpected results are taken into account by the Elm SDK,
but logged so that the programmer can do something about them.
-}
logs : { keyIsNotAnInt : String -> String }
logs =
{ keyIsNotAnInt =
\key ->
String.concat
[ "Encountered a key `"
, key
, "` that cannot be converted to an Int"
]
}
{-| Function descriptions
-}
mappings : { itokenPTR : TypeDocs }
mappings =
{ itokenPTR =
{ name = "ITokenPTR init"
, description =
[ "Converts an optional string to an Itoken pointer."
]
}
}
{-| Logs for issues that might be found while parsing strings into meaningful data.
-}
parses :
{ historicalUserId : String -> String
, reservedIPs :
{ ipv6Toipv4 : String
, multicast : String
, futureUse : String
, unspecified : String
}
}
parses =
{ historicalUserId = \name -> "Found a historical username `" ++ name ++ "`."
, reservedIPs =
{ ipv6Toipv4 = "Detected a reserved ip address that is formerly used as an IPv6 to IPv4 relay. It is unlikely that this IP Address is real."
, multicast = "Detected a reserved ip address that is used for multicasting. It is unlikely that this IP Address is real."
, futureUse = "Detected a reserved ip address that is reserved for future use. It is unlikely that this IP Address is real if you're running a recent version of the Elm SDK."
, unspecified = "This is an unspecified ip address. It is unlikely that this IP Address is real and someone might try to break something."
}
}
{-| The Matrix homeserver can specify how it wishes to communicate, and the Elm
SDK aims to communicate accordingly. This may fail in some scenarios, however,
in which case it will throw this error.

View File

@ -0,0 +1,350 @@
module Internal.Filter.Timeline exposing
( Filter
, pass, onlySenders, allSendersExcept, onlyTypes, allTypesExcept, fail
, match, run
, and
, subsetOf
, coder, encode, decoder
)
{-|
# Timeline filter
The timeline filter creates filters for looking through a timeline, as well as
for interacting with the Matrix API.
## Timeline
@docs Filter
## Create
@docs pass, onlySenders, allSendersExcept, onlyTypes, allTypesExcept, fail
## Filter
@docs match, run
## Combine
@docs and
## Compare
@docs subsetOf
## JSON coders
@docs coder, encode, decoder
-}
import Internal.Config.Text as Text
import Internal.Grammar.UserId as U
import Internal.Tools.Json as Json
import Json.Decode as D
import Json.Encode as E
import Set exposing (Set)
{-| Placeholder Event type so the real Event doesn't need to be imported.
-}
type alias Event a =
{ a | eventType : String, sender : U.UserID }
{-| The Timeline Filter filters events out of a timeline, guaranteeing that only
the events that meet the given criteria remain.
-}
type Filter
= Filter
{ senders : Set String
, sendersAllowOthers : Bool
, types : Set String
, typesAllowOthers : Bool
}
{-| Allow events from all senders, except if they are on the provided list.
If the list is empty, all events are allowed.
-}
allSendersExcept : List String -> Filter
allSendersExcept senders =
case senders of
[] ->
pass
_ :: _ ->
Filter
{ senders = Set.fromList senders
, sendersAllowOthers = True
, types = Set.empty
, typesAllowOthers = True
}
{-| Allow events of every event type, except if they are on the provided list.
If the list is empty, all events are allowed.
-}
allTypesExcept : List String -> Filter
allTypesExcept types =
case types of
[] ->
pass
_ :: _ ->
Filter
{ senders = Set.empty
, sendersAllowOthers = True
, types = Set.fromList types
, typesAllowOthers = True
}
{-| Only allow an event if it meets the criteria of two Filters.
-}
and : Filter -> Filter -> Filter
and (Filter f1) (Filter f2) =
let
stdAnd : Filter
stdAnd =
Filter
{ senders =
case ( f1.sendersAllowOthers, f2.sendersAllowOthers ) of
( True, True ) ->
Set.union f1.senders f2.senders
( True, False ) ->
Set.diff f2.senders f1.senders
( False, True ) ->
Set.diff f1.senders f2.senders
( False, False ) ->
Set.intersect f1.senders f2.senders
, sendersAllowOthers = f1.sendersAllowOthers && f2.sendersAllowOthers
, types =
case ( f1.typesAllowOthers, f2.typesAllowOthers ) of
( True, True ) ->
Set.union f1.types f2.types
( True, False ) ->
Set.diff f2.types f1.types
( False, True ) ->
Set.diff f1.types f2.types
( False, False ) ->
Set.intersect f1.types f2.types
, typesAllowOthers = f1.typesAllowOthers && f2.typesAllowOthers
}
in
case stdAnd of
Filter f ->
if Set.isEmpty f.senders && not f.sendersAllowOthers then
fail
else if Set.isEmpty f.types && not f.typesAllowOthers then
fail
else
stdAnd
{-| Define how to encode and decode a Timeline Filter to and from a JSON value.
-}
coder : Json.Coder Filter
coder =
Json.object4
{ name = Text.docs.timelineFilter.name
, description = Text.docs.timelineFilter.description
, init =
\a b c d ->
Filter
{ senders = a
, sendersAllowOthers = b
, types = c
, typesAllowOthers = d
}
}
(Json.field.optional.withDefault
{ fieldName = "senders"
, toField = \(Filter f) -> f.senders
, description = Text.fields.timelineFilter.senders
, coder = Json.set Json.string
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
(Json.field.required
{ fieldName = "sendersAllowOthers"
, toField = \(Filter f) -> f.sendersAllowOthers
, description = Text.fields.timelineFilter.sendersAllowOthers
, coder = Json.bool
}
)
(Json.field.optional.withDefault
{ fieldName = "types"
, toField = \(Filter f) -> f.types
, description = Text.fields.timelineFilter.types
, coder = Json.set Json.string
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
(Json.field.required
{ fieldName = "typesAllowOthers"
, toField = \(Filter f) -> f.typesAllowOthers
, description = Text.fields.timelineFilter.typesAllowOthers
, coder = Json.bool
}
)
{-| Decode a Filter from a JSON value.
-}
decoder : Json.Decoder Filter
decoder =
Json.decode coder
{-| Encode a Filter into a JSON value.
-}
encode : Json.Encoder Filter
encode =
Json.encode coder
{-| Allow no events. This filter is likely quite useless in practice, but it is
used in the test environment for sanity checks and comparisons.
-}
fail : Filter
fail =
Filter
{ senders = Set.empty
, sendersAllowOthers = False
, types = Set.empty
, typesAllowOthers = False
}
{-| Determine whether an event passes a filter.
-}
match : Filter -> Event a -> Bool
match (Filter f) { eventType, sender } =
let
mentionedSender : Bool
mentionedSender =
Set.member (U.toString sender) f.senders
mentionedType : Bool
mentionedType =
Set.member eventType f.types
in
xor mentionedSender f.sendersAllowOthers
&& xor mentionedType f.typesAllowOthers
{-| Only allow events sent by the given senders.
If an empty list is given, no events are allowed.
-}
onlySenders : List String -> Filter
onlySenders senders =
case senders of
[] ->
fail
_ :: _ ->
Filter
{ senders = Set.fromList senders
, sendersAllowOthers = False
, types = Set.empty
, typesAllowOthers = True
}
{-| Only allow events of a given event type.
If an empty list is given, no events are allowed.
-}
onlyTypes : List String -> Filter
onlyTypes types =
case types of
[] ->
fail
_ :: _ ->
Filter
{ senders = Set.empty
, sendersAllowOthers = True
, types = Set.fromList types
, typesAllowOthers = False
}
{-| Create a filter that allows all events. This can be useful when trying to
combine multiple filters, or when simply all events are allowed.
-}
pass : Filter
pass =
Filter
{ senders = Set.empty
, sendersAllowOthers = True
, types = Set.empty
, typesAllowOthers = True
}
{-| Use a filter on a list of events.
-}
run : Filter -> List (Event a) -> List (Event a)
run f events =
List.filter (match f) events
{-| Determine whether the second argument is a subset filter of the first
argument.
-}
subsetOf : Filter -> Filter -> Bool
subsetOf (Filter big) (Filter small) =
let
isSSof : Set String -> Set String -> Bool
isSSof b s =
Set.intersect b s == s
isSubsetFor : ( Bool, Set String ) -> ( Bool, Set String ) -> Bool
isSubsetFor ( bb, sb ) ( bs, ss ) =
case ( bb, bs ) of
( True, True ) ->
isSSof ss sb
( True, False ) ->
Set.isEmpty (Set.intersect sb ss)
( False, True ) ->
False
( False, False ) ->
isSSof sb ss
in
isSubsetFor
( big.sendersAllowOthers, big.senders )
( small.sendersAllowOthers, small.senders )
&& isSubsetFor
( big.typesAllowOthers, big.types )
( small.typesAllowOthers, small.types )

View File

@ -0,0 +1,279 @@
module Internal.Grammar.ServerName exposing
( ServerName, toString, fromString
, serverNameParser
, HostName(..)
)
{-|
# Server name
A homeserver is uniquely identified by its server name. The server name
represents the address at which the homeserver in question can be reached by
other homeservers.
@docs ServerName, toString, fromString
## Parser
@docs serverNameParser
## Debug
@docs HostName
-}
import Internal.Tools.ParserExtra as PE
import Parser as P exposing ((|.), (|=), Parser)
{-| The hostname is the location where the server can be found.
Notice how the Matrix spec specifies that the hostname can either be a DNS name,
an IPv4Address or an IPv6Address. However, since the IPv4Address is compatible
with the specification of DNS names, and RFC1123 (section 2.1) does not require
a client to distinguish them, we treat IPv4Addresses like DNS names.
-}
type HostName
= DNS String
| IPv6 IPv6Address
{-| The IPv6Address is represented by a list of items BEFORE and AFTER the
double colons (::).
-}
type alias IPv6Address =
{ front : List String, back : List String }
{-| The server name is a combination of a hostname and an optional port.
-}
type alias ServerName =
{ host : HostName, port_ : Maybe Int }
{-| Parser for the DNS name record. The Matrix spec bases its grammar on the
standard for internet host names, as specified by RFC1123, section 2.1, with an
extension for IPv6 literals.
[RFC-1123 §2.2]
The syntax of a legal Internet host name was specified in RFC-952
[DNS:4]. One aspect of host name syntax is hereby changed: the
restriction on the first character is relaxed to allow either a
letter or a digit. Host software MUST support this more liberal
syntax.
Host software MUST handle host names of up to 63 characters and
SHOULD handle host names of up to 255 characters.
[RFC-952 §Assumptions-1]
A "name" (Net, Host, Gateway, or Domain name) is a text string up
to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus
sign (-), and period (.). Note that periods are only allowed when
they serve to delimit components of "domain style names". (See
RFC-921, "Domain Name System Implementation Schedule", for
background).
-}
dnsNameParser : Parser String
dnsNameParser =
P.chompIf Char.isAlphaNum
|. P.chompWhile (\c -> Char.isAlphaNum c || c == '-' || c == '.')
|> P.getChompedString
{-| Convert a string to a server name.
-}
fromString : String -> Maybe ServerName
fromString s =
P.run (serverNameParser |. P.end) s
|> (\out ->
case out of
Ok _ ->
out
Err e ->
Debug.log "No parse" e
|> always (Debug.log "original" s)
|> always out
)
|> Result.toMaybe
{-| Parse a Hostname.
-}
hostnameParser : Parser HostName
hostnameParser =
P.oneOf
[ P.succeed IPv6
|. P.symbol "["
|= ipv6Parser
|. P.symbol "]"
, P.succeed DNS
|= dnsNameParser
]
{-| Parse all values to the left of the double colon (::)
-}
ipv6LeftParser : Parser (List String)
ipv6LeftParser =
P.oneOf
[ P.succeed []
|. P.symbol ":"
, P.succeed (|>)
|= PE.times 1 7 (ipv6NumParser |. P.symbol ":")
|= P.oneOf
[ P.succeed (\bottom tail -> tail ++ [ bottom ])
|= ipv6NumParser
, P.succeed identity
]
]
{-| Parse an ordinary IPv6 number
-}
ipv6NumParser : Parser String
ipv6NumParser =
P.chompIf Char.isHexDigit
|> P.getChompedString
|> PE.times 1 4
|> P.map String.concat
{-| Parse an IPv6 Address
-}
ipv6Parser : Parser IPv6Address
ipv6Parser =
ipv6LeftParser
|> P.andThen
(\front ->
if List.length front < 8 then
P.succeed (IPv6Address front)
|= ipv6RightParser (8 - 1 - List.length front)
-- The -1 is because :: implies one or more zeroes
else
P.succeed (IPv6Address front [])
)
{-| Parse all values to the right of the double colon (::)
-}
ipv6RightParser : Int -> Parser (List String)
ipv6RightParser n =
if n > 0 then
P.succeed identity
|. P.symbol ":"
|= P.oneOf
[ P.succeed (::)
|= ipv6NumParser
|= PE.times 0
(n - 1)
(P.succeed identity
|. P.symbol ":"
|= ipv6NumParser
)
, P.succeed []
]
else
P.succeed []
|. P.symbol ":"
{-| Convert an IPv6 address to a readable string format
-}
ipv6ToString : IPv6Address -> String
ipv6ToString { front, back } =
(if List.length front == 8 then
front
else if List.length back == 8 then
back
else
List.concat [ front, [ "" ], back ]
)
|> List.intersperse ":"
|> String.concat
portParser : Parser Int
portParser =
P.chompIf Char.isDigit
|. P.chompWhile Char.isDigit
|> P.getChompedString
|> P.andThen
(\v ->
case String.toInt v of
Just i ->
if 0 <= i && i <= 2 ^ 16 - 1 then
P.succeed i
else
P.problem ("Port out of range: " ++ v)
Nothing ->
P.problem "Not a port number"
)
{-| Parse a server name. Generally used by other identifiers that have a server
name as one of their parts.
-}
serverNameParser : Parser ServerName
serverNameParser =
P.succeed ServerName
|= hostnameParser
|= P.oneOf
[ P.succeed Just
|. P.symbol ":"
|= portParser
, P.succeed Nothing
]
{-| Convert a parsed server name back to a string.
-}
toString : ServerName -> String
toString { host, port_ } =
let
hostString : String
hostString =
case host of
DNS name ->
name
IPv6 { front, back } ->
(if List.length front == 8 then
List.intersperse ":" front
else if List.length back == 8 then
List.intersperse ":" back
else
List.concat
[ List.intersperse ":" front
, [ "::" ]
, List.intersperse ":" back
]
)
|> String.concat
|> (\i -> "[" ++ i ++ "]")
portString : String
portString =
port_
|> Maybe.map String.fromInt
|> Maybe.map ((++) ":")
|> Maybe.withDefault ""
in
hostString ++ portString

View File

@ -0,0 +1,128 @@
module Internal.Grammar.UserId exposing
( UserID, toString, fromString
, userIdParser, isHistorical
)
{-|
# User ids
Users within Matrix are uniquely identified by their Matrix user ID. The user
ID is namespaced to the homeserver which allocated the account and has the form:
@localpart:domain
The localpart of a user ID is an opaque identifier for that user. It MUST NOT
be empty, and MUST contain only the characters a-z, 0-9, ., \_, =, -, /, and +.
The domain of a user ID is the server name of the homeserver which allocated
the account.
The length of a user ID, including the @ sigil and the domain, MUST NOT exceed
255 characters.
The complete grammar for a legal user ID is:
user_id = "@" user_id_localpart ":" server_name
user_id_localpart = 1*user_id_char
user_id_char = DIGIT
/ %x61-7A ; a-z
/ "-" / "." / "=" / "_" / "/" / "+"
Older versions of this specification were more tolerant of the characters
permitted in user ID localparts. There are currently active users whose user
IDs do not conform to the permitted character set, and a number of rooms whose
history includes events with a sender which does not conform. In order to
handle these rooms successfully, clients and servers MUST accept user IDs with
localparts from the expanded character set:
extended_user_id_char = %x21-39 / %x3B-7E ; all ASCII printing chars except :
## User ID
@docs UserID, toString, fromString
## Extra
@docs userIdParser, isHistorical
-}
import Internal.Grammar.ServerName as ServerName exposing (ServerName)
import Internal.Tools.ParserExtra as PE
import Parser as P exposing ((|.), (|=), Parser)
{-| The User ID type defining a user.
-}
type alias UserID =
{ localpart : String, domain : ServerName }
{-| Convert a Matrix User ID back into its uniquely identifying string.
-}
fromString : String -> Maybe UserID
fromString =
P.run (userIdParser |. P.end) >> Result.toMaybe
{-| Return a boolean on whether a Matrix user has a historical user ID.
Since this user ID is not SUPPOSED to be legal but clients are nevertheless
forced to support them due to backwards compatibility, clients may occasionally
attempt to break the rules in search of undefined behaviour.
As a result, an explicit method to spot historical users is added to the SDK.
-}
isHistorical : UserID -> Bool
isHistorical { localpart } =
String.any
(\c ->
let
i : Int
i =
Char.toCode c
in
not ((0x61 <= i && i <= 0x7A) || Char.isAlpha c)
)
localpart
localpartParser : Parser String
localpartParser =
P.chompIf validHistoricalUsernameChar
|> P.getChompedString
|> PE.times 1 255
|> P.map String.concat
{-| Convert a parsed User ID to a string.
-}
toString : UserID -> String
toString { localpart, domain } =
String.concat [ "@", localpart, ":", ServerName.toString domain ]
{-| Parse a UserID from a string.
-}
userIdParser : Parser UserID
userIdParser =
P.succeed UserID
|. P.symbol "@"
|= localpartParser
|. P.symbol ":"
|= ServerName.serverNameParser
|> PE.maxLength 255
validHistoricalUsernameChar : Char -> Bool
validHistoricalUsernameChar c =
let
i : Int
i =
Char.toCode c
in
(0x21 <= i && i <= 0x39) || (0x3B <= i && i <= 0x7E)

View File

@ -3,7 +3,7 @@ module Internal.Tools.Hashdict exposing
, empty, singleton, insert, remove, removeKey
, isEmpty, member, memberKey, get, size, isEqual
, keys, values, toList, fromList
, rehash, union
, rehash, union, map
, coder, encode, decoder, softDecoder
)
@ -35,7 +35,7 @@ This allows you to store values based on an externally defined identifier.
## Transform
@docs rehash, union
@docs rehash, union, map
## JSON coders
@ -189,6 +189,34 @@ keys (Hashdict h) =
Dict.keys h.values
{-| Map a value on a given key. If the outcome of the function changes the hash,
the operation does nothing.
-}
map : String -> (a -> a) -> Hashdict a -> Hashdict a
map key f (Hashdict h) =
Hashdict
{ h
| values =
Dict.update
key
(Maybe.map
(\value ->
let
newValue : a
newValue =
f value
in
if h.hash newValue == h.hash value then
newValue
else
value
)
)
h.values
}
{-| Determine if a value's hash is in a hashdict.
-}
member : a -> Hashdict a -> Bool

View File

@ -3,7 +3,7 @@ module Internal.Tools.Iddict exposing
, empty, singleton, insert, map, remove
, isEmpty, member, get, size
, keys, values
, encode, decoder
, coder, encode, decoder
)
{-| The id-dict is a data type that lets us store values in a dictionary using
@ -36,13 +36,13 @@ do not need to generate identifiers yourself.
## JSON coders
@docs encode, decoder
@docs coder, encode, decoder
-}
import FastDict as Dict exposing (Dict)
import Json.Decode as D
import Json.Encode as E
import Internal.Config.Text as Text
import Internal.Tools.Json as Json
{-| The Iddict data type.
@ -54,41 +54,49 @@ type Iddict a
}
{-| Define how an Iddict can be encoded and decoded to and from a JSON value.
-}
coder : Json.Coder a -> Json.Coder (Iddict a)
coder x =
Json.object2
{ name = Text.docs.iddict.name
, description = Text.docs.iddict.description
, init =
\c d ->
Iddict
{ cursor =
Dict.keys d
|> List.maximum
|> Maybe.map ((+) 1)
|> Maybe.withDefault 0
|> max (Dict.size d)
|> max c
, dict = d
}
}
(Json.field.optional.withDefault
{ fieldName = "cursor"
, toField = \(Iddict i) -> i.cursor
, description = Text.fields.iddict.cursor
, coder = Json.int
, default = ( 0, [] )
, defaultToString = String.fromInt
}
)
(Json.field.required
{ fieldName = "dict"
, toField = \(Iddict i) -> i.dict
, description = Text.fields.iddict.dict
, coder = Json.fastIntDict x
}
)
{-| Decode an id-dict from a JSON value.
-}
decoder : D.Decoder a -> D.Decoder (Iddict a)
decoder xDecoder =
D.map2
(\c pairs ->
let
dict : Dict Int a
dict =
pairs
|> List.filterMap
(\( k, v ) ->
k
|> String.toInt
|> Maybe.map (\n -> ( n, v ))
)
|> Dict.fromList
in
Iddict
{ cursor =
Dict.keys dict
-- Larger than all values in the list
|> List.map ((+) 1)
|> List.maximum
|> Maybe.withDefault 0
|> max (Dict.size dict)
-- At least the dict size
|> max c
-- At least the given value
, dict = dict
}
)
(D.field "cursor" D.int)
(D.field "dict" <| D.keyValuePairs xDecoder)
decoder : Json.Coder a -> Json.Decoder (Iddict a)
decoder x =
Json.decode (coder x)
{-| Create an empty id-dict.
@@ -103,16 +111,9 @@ empty =
{-| Encode an id-dict to a JSON value.
-}
encode : (a -> E.Value) -> Iddict a -> E.Value
encode encodeX (Iddict d) =
E.object
[ ( "cursor", E.int d.cursor )
, ( "dict"
, d.dict
|> Dict.toCoreDict
|> E.dict String.fromInt encodeX
)
]
encode : Json.Coder a -> Json.Encoder (Iddict a)
encode x =
Json.encode (coder x)
{-| Get a value from the id-dict using its key.
@@ -3,8 +3,8 @@ module Internal.Tools.Json exposing
, Encoder, encode, Decoder, decode, Value
, succeed, fail, andThen, lazy, map
, Docs(..), RequiredField(..), toDocs
, list, slowDict, fastDict, maybe
, Field, field
, list, listWithOne, slowDict, fastDict, fastIntDict, set, maybe
, Field, field, parser
, object2, object3, object4, object5, object6, object7, object8, object9, object10, object11
)
@@ -49,7 +49,7 @@ module to build its encoders and decoders.
## Data types
@docs list, slowDict, fastDict, maybe
@docs list, listWithOne, slowDict, fastDict, fastIntDict, set, maybe
## Objects
@@ -58,7 +58,7 @@ This section creates objects that can be (re)used in the library's JSON
specification. For this, the user needs to construct fields for the object
first.
@docs Field, field
@docs Field, field, parser
Once all fields are constructed, the user can create JSON objects.
@@ -68,11 +68,14 @@ Once all fields are constructed, the user can create JSON objects.
import Dict as SlowDict
import FastDict
import Internal.Config.Log exposing (Log)
import Internal.Config.Log as Log exposing (Log)
import Internal.Config.Text as Text
import Internal.Tools.DecodeExtra as D
import Internal.Tools.EncodeExtra as E
import Json.Decode as D
import Json.Encode as E
import Parser as P
import Set exposing (Set)
{-| A field of type `a` as a subtype of an object `object`.
@@ -139,8 +142,10 @@ type Docs
| DocsDict Docs
| DocsFloat
| DocsInt
| DocsIntDict Docs
| DocsLazy (() -> Docs)
| DocsList Docs
| DocsListWithOne Docs
| DocsMap (Descriptive { content : Docs })
| DocsObject
(Descriptive
@@ -154,7 +159,9 @@ type Docs
}
)
| DocsOptional Docs
| DocsParser String
| DocsRiskyMap (Descriptive { content : Docs, failure : List String })
| DocsSet Docs
| DocsString
| DocsValue
@@ -290,6 +297,48 @@ fastDict (Coder old) =
}
{-| Define a fast dict where the keys are integers, not strings.
-}
fastIntDict : Coder value -> Coder (FastDict.Dict Int value)
fastIntDict (Coder old) =
Coder
{ encoder = FastDict.toCoreDict >> E.dict String.fromInt old.encoder
, decoder =
old.decoder
|> D.keyValuePairs
|> D.map
(\items ->
( items
|> List.map (Tuple.mapSecond Tuple.first)
|> List.filterMap
(\( k, v ) ->
Maybe.map (\a -> ( a, v )) (String.toInt k)
)
|> FastDict.fromList
, List.concat
[ items
|> List.map Tuple.first
|> List.filter
(\k ->
case String.toInt k of
Just _ ->
True
Nothing ->
False
)
|> List.map Text.logs.keyIsNotAnInt
|> List.map Log.log.warn
, items
|> List.map Tuple.second
|> List.concatMap Tuple.second
]
)
)
, docs = DocsIntDict old.docs
}
{-| Create a new field using any of the three provided options.
For example, suppose we are creating a `Field String User` to represent the
@@ -465,6 +514,32 @@ list (Coder old) =
}
{-| Define a list that has at least one value.
-}
listWithOne : Coder a -> Coder ( a, List a )
listWithOne (Coder old) =
Coder
{ encoder = \( h, t ) -> E.list old.encoder (h :: t)
, decoder =
old.decoder
|> D.list
|> D.andThen
(\items ->
case items of
[] ->
D.fail Text.failures.listWithOne
( h, l1 ) :: t ->
D.succeed
( ( h, List.map Tuple.first items )
, List.concatMap Tuple.second t
|> List.append l1
)
)
, docs = DocsListWithOne old.docs
}
{-| Map a value.
Given that the value needs to be both encoded and decoded, the map function
@@ -1079,6 +1154,49 @@ object11 { name, description, init } fa fb fc fd fe ff fg fh fi fj fk =
}
{-| Define a parser that converts a string into a custom Elm type.
-}
parser : { name : String, p : P.Parser ( a, List Log ), toString : a -> String } -> Coder a
parser { name, p, toString } =
Coder
{ encoder = toString >> E.string
, decoder =
D.string
|> D.andThen
(\v ->
case P.run p v of
Err _ ->
D.fail ("Failed to parse " ++ name ++ "!")
Ok o ->
D.succeed o
)
, docs = DocsParser name
}
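As a sketch of how `parser` might be used (this example is not in the module; it assumes the `(|=)` and `(|.)` operators from `elm/parser` are imported):

```elm
-- Hypothetical coder for percentages written as "NN%". Decoding "abc"
-- fails; encoding 42 produces the JSON string "42%".
percentCoder : Coder Int
percentCoder =
    parser
        { name = "Percentage"
        , p =
            P.succeed (\n -> ( n, [] ))
                |= P.int
                |. P.symbol "%"
        , toString = \n -> String.fromInt n ++ "%"
        }
```

The empty list in the parser result is the list of `Log` messages; a real coder could emit warnings there, as the `User` coder later in this change does.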
{-| Define a set.
-}
set : Coder comparable -> Coder (Set comparable)
set (Coder data) =
Coder
{ encoder = E.set data.encoder
, decoder =
data.decoder
|> D.list
|> D.map
(\items ->
( items
|> List.map Tuple.first
|> Set.fromList
, items
|> List.concatMap Tuple.second
)
)
, docs = DocsSet data.docs
}
{-| Define a slow dict from the `elm/core` library.
-}
slowDict : Coder value -> Coder (SlowDict.Dict String value)
@@ -3,7 +3,7 @@ module Internal.Tools.Mashdict exposing
, empty, singleton, insert, remove, removeKey
, isEmpty, member, memberKey, get, size, isEqual
, keys, values, toList, fromList
, rehash, union
, rehash, union, map
, coder, encode, decoder, softDecoder
)
@@ -43,7 +43,7 @@ In general, you are advised to learn more about the
## Transform
@docs rehash, union
@docs rehash, union, map
## JSON coders
@@ -205,6 +205,34 @@ keys (Mashdict h) =
Dict.keys h.values
{-| Map the value at a given key. If the outcome of the function changes the
value's hash, the operation does nothing.
-}
map : String -> (a -> a) -> Mashdict a -> Mashdict a
map key f (Mashdict h) =
Mashdict
{ h
| values =
Dict.update
key
(Maybe.map
(\value ->
case h.hash (f value) of
Just newHash ->
if newHash == key then
f value
else
value
Nothing ->
value
)
)
h.values
}
{-| Determine if a value's hash is in a mashdict.
-}
member : a -> Mashdict a -> Bool
@@ -0,0 +1,105 @@
module Internal.Tools.ParserExtra exposing (..)
import Parser as P exposing ((|.), (|=), Parser)
zeroOrMore : Parser a -> Parser (List a)
zeroOrMore parser =
P.loop []
(\tail ->
P.oneOf
[ P.succeed (\head -> P.Loop (head :: tail))
|= parser
, P.succeed (P.Done (List.reverse tail))
]
)
oneOrMore : Parser a -> Parser (List a)
oneOrMore parser =
P.succeed (::)
|= parser
|= zeroOrMore parser
atLeast : Int -> Parser a -> Parser (List a)
atLeast n parser =
P.loop []
(\tail ->
if List.length tail < n then
P.succeed (\head -> P.Loop (head :: tail))
|= parser
else
P.oneOf
[ P.succeed (\head -> P.Loop (head :: tail))
|= parser
, P.succeed (P.Done (List.reverse tail))
]
)
atMost : Int -> Parser a -> Parser (List a)
atMost n parser =
P.loop []
(\tail ->
if List.length tail < n then
P.oneOf
[ P.succeed (\head -> P.Loop (head :: tail))
|= parser
, P.succeed (P.Done (List.reverse tail))
]
else
P.succeed (P.Done (List.reverse tail))
)
times : Int -> Int -> Parser a -> Parser (List a)
times inf sup parser =
let
low : Int
low =
max 0 (min inf sup)
high : Int
high =
max 0 sup
in
P.loop []
(\tail ->
if List.length tail < low then
P.succeed (\head -> P.Loop (head :: tail))
|= parser
else if List.length tail < high then
P.oneOf
[ P.succeed (\head -> P.Loop (head :: tail))
|= parser
, P.succeed (P.Done (List.reverse tail))
]
else
P.succeed (P.Done (List.reverse tail))
)
exactly : Int -> Parser a -> Parser (List a)
exactly n =
times n n
maxLength : Int -> Parser a -> Parser a
maxLength n parser =
P.succeed
(\start value end ->
if abs (end - start) > n then
P.problem "Parsed too much text!"
else
P.succeed value
)
|= P.getOffset
|= parser
|= P.getOffset
|> P.andThen identity
@@ -2,6 +2,7 @@ module Internal.Values.Event exposing
( Event
, UnsignedData(..), age, prevContent, redactedBecause, transactionId
, coder, encode, decoder
, isEqual
)
{-|
@@ -24,11 +25,18 @@ of a room.
@docs coder, encode, decoder
## Test functions
@docs isEqual
-}
import Internal.Config.Text as Text
import Internal.Tools.Json as Json
import Internal.Tools.Timestamp as Timestamp exposing (Timestamp)
import Internal.Values.User as User exposing (User)
import Json.Encode as E
{-| The Event type occurs everywhere on a user's timeline.
@@ -38,7 +46,7 @@ type alias Event =
, eventId : String
, originServerTs : Timestamp
, roomId : String
, sender : String
, sender : User
, stateKey : Maybe String
, eventType : String
, unsigned : Maybe UnsignedData
@@ -105,7 +113,7 @@ coder =
{ fieldName = "sender"
, toField = .sender
, description = Text.fields.event.sender
, coder = Json.string
, coder = User.coder
}
)
(Json.field.optional.value
@@ -148,6 +156,65 @@ encode =
Json.encode coder
{-| Compare two events and determine whether they're identical. Used mostly for
testing purposes.
-}
isEqual : Event -> Event -> Bool
isEqual e1 e2 =
if e1.eventId /= e2.eventId then
False
else if e1.originServerTs /= e2.originServerTs then
False
else if e1.roomId /= e2.roomId then
False
else if e1.sender /= e2.sender then
False
else if e1.stateKey /= e2.stateKey then
False
else if e1.eventType /= e2.eventType then
False
else
case ( e1.unsigned, e2.unsigned ) of
( Nothing, Nothing ) ->
True
( Just _, Nothing ) ->
False
( Nothing, Just _ ) ->
False
( Just (UnsignedData d1), Just (UnsignedData d2) ) ->
if d1.age /= d2.age then
False
else if d1.transactionId /= d2.transactionId then
False
else if Maybe.map (E.encode 0) d1.prevContent /= Maybe.map (E.encode 0) d2.prevContent then
False
else
case ( d1.redactedBecause, d2.redactedBecause ) of
( Nothing, Nothing ) ->
True
( Nothing, Just _ ) ->
False
( Just _, Nothing ) ->
False
( Just se1, Just se2 ) ->
isEqual se1 se2
{-| Determine the previous `content` value for this event. This field is only a
`Just value` if the event is a state event, and the Matrix Vault has permission
to see the previous content.
@@ -0,0 +1,707 @@
module Internal.Values.Timeline exposing
( Batch, Timeline
, empty, singleton
, mostRecentEvents, mostRecentEventsFrom
, addSync, insert
, coder, encode, decoder
)
{-|
# Timeline
The Timeline data type represents a timeline in a Matrix room. The Matrix room
timeline is a complex data type because the client only ever knows parts of it.
This module exposes a data type that helps explore, track and maintain this
partial room state.
This design of the timeline uses batches as waypoints to maintain an order.
The Matrix API often returns batches that contain the following four pieces of
information:
1. A list of events.
2. A filter whose criteria all of the events meet.
3. An end batch token.
4. _(Optional)_ A start batch token. If it is not provided, it is the start of
the timeline.
Here's an example of such a timeline batch:
    |-->[■]->[■]->[●]->[■]->[■]->[●]-->|
    |                                  |
    |<-- filter: only ■ and ●, no ◆ -->|
    |                                  |
    start:                         end:
    <token_1>                      <token_2>
When the Matrix API later returns a batch token that starts with `<token_2>`,
we know that we can connect it to the batch above and make a longer list of
events!
## Batch
@docs Batch, Timeline
## Create
@docs empty, singleton
## Query
@docs mostRecentEvents, mostRecentEventsFrom
## Manipulate
@docs addSync, insert
## JSON coder
@docs coder, encode, decoder
-}
import FastDict as Dict exposing (Dict)
import Internal.Config.Text as Text
import Internal.Filter.Timeline as Filter exposing (Filter)
import Internal.Tools.Hashdict as Hashdict exposing (Hashdict)
import Internal.Tools.Iddict as Iddict exposing (Iddict)
import Internal.Tools.Json as Json
import Recursion
import Recursion.Traverse
import Set exposing (Set)
{-| A batch is a batch of events that is placed onto the Timeline. Functions
that require an insertion, generally require this data type.
If the `start` value is `Nothing`, it is either the start of the timeline or the
start of the timeline part that the user is allowed to view.
-}
type alias Batch =
{ events : List String
, filter : Filter
, start : Maybe TokenValue
, end : TokenValue
}
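For illustration, the batch drawn in the module documentation could be captured in this type as follows (event IDs and token names are hypothetical):

```elm
exampleBatch : Batch
exampleBatch =
    { events = [ "$e1", "$e2", "$e3", "$e4", "$e5", "$e6" ]
    , filter = Filter.pass -- stand-in for "only ■ and ●, no ◆"
    , start = Just "<token_1>"
    , end = "<token_2>"
    }
```

When another batch later arrives whose start token is `<token_2>`, the two batches can be chained into one longer list of events.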
{-| Internal batch that's being saved by the Timeline to track a list of events.
-}
type alias IBatch =
{ events : List String
, filter : Filter
, start : ITokenPTR
, end : ITokenPTR
}
{-| Pointer to an IBatch in the Timeline.
-}
type IBatchPTR
= IBatchPTR IBatchPTRValue
{-| Location indicator of an IBatch in the Timeline.
-}
type alias IBatchPTRValue =
Int
{-| Internal token value that's being stored by the Timeline. Note that an
IToken always has a name; the start of the timeline is instead indicated by the
`StartOfTimeline` variant of the ITokenPTR type.
-}
type alias IToken =
{ name : TokenValue
, starts : Set IBatchPTRValue -- This itoken starts the following batches
, ends : Set IBatchPTRValue -- This itoken ends the following batches
, inFrontOf : Set ITokenPTRValue -- This itoken is in front of the following tokens
, behind : Set ITokenPTRValue -- This itoken is behind the following tokens
}
{-| Pointer to an IToken in the Timeline.
-}
type ITokenPTR
= ITokenPTR ITokenPTRValue
| StartOfTimeline
{-| Location indicator of an IToken in the Timeline.
-}
type alias ITokenPTRValue =
String
{-| The Timeline type represents the timeline state in a Matrix room.
Following the description of the Matrix spec, a timeline contains the following
items:
- Events that indicate timeline events
- Batch values that can be used to paginate through the timeline
The topological shape of the timeline makes older API responses somewhat
unreliable. As a result, the Timeline keeps batches as separate fragments and
only stitches them together once their pagination tokens show how they connect.
-}
type Timeline
= Timeline
{ batches : Iddict IBatch
, events : Dict String ( IBatchPTR, List IBatchPTR )
, filledBatches : Int
, mostRecentBatch : ITokenPTR
, tokens : Hashdict IToken
}
{-| Opaque token value sent by the Matrix API
-}
type alias TokenValue =
String
{-| Add a new batch as a sync, marking its end as the most recent point on the
timeline.
-}
addSync : Batch -> Timeline -> Timeline
addSync batch timeline =
case insertBatch batch timeline of
( Timeline t, { start, end } ) ->
let
old : ITokenPTR
old =
t.mostRecentBatch
in
case Timeline { t | mostRecentBatch = end } of
tl ->
if old == start then
tl
else
connectITokenToIToken old start tl
{-| Define how a Timeline can be encoded and decoded to and from a JSON value.
-}
coder : Json.Coder Timeline
coder =
Json.object5
{ name = Text.docs.timeline.name
, description = Text.docs.timeline.description
, init =
\a b c d e ->
Timeline
{ batches = a
, events = b
, filledBatches = c
, mostRecentBatch = d
, tokens = e
}
}
(Json.field.required
{ fieldName = "batches"
, toField = \(Timeline t) -> t.batches
, description = Text.fields.timeline.batches
, coder = Iddict.coder coderIBatch
}
)
(Json.field.required
{ fieldName = "events"
, toField = \(Timeline t) -> t.events
, description = Text.fields.timeline.events
, coder = Json.fastDict (Json.listWithOne coderIBatchPTR)
}
)
(Json.field.optional.withDefault
{ fieldName = "filledBatches"
, toField = \(Timeline t) -> t.filledBatches
, description = Text.fields.timeline.filledBatches
, coder = Json.int
, default = ( 0, [] )
, defaultToString = String.fromInt
}
)
(Json.field.required
{ fieldName = "mostRecentBatch"
, toField = \(Timeline t) -> t.mostRecentBatch
, description = Text.fields.timeline.mostRecentBatch
, coder = coderITokenPTR
}
)
(Json.field.required
{ fieldName = "tokens"
, toField = \(Timeline t) -> t.tokens
, description = Text.fields.timeline.tokens
, coder = Hashdict.coder .name coderIToken
}
)
{-| Define how to encode and decode a IBatch to and from a JSON value.
-}
coderIBatch : Json.Coder IBatch
coderIBatch =
Json.object4
{ name = Text.docs.ibatch.name
, description = Text.docs.ibatch.description
, init = IBatch
}
(Json.field.required
{ fieldName = "events"
, toField = .events
, description = Text.fields.ibatch.events
, coder = Json.list Json.string
}
)
(Json.field.required
{ fieldName = "filter"
, toField = .filter
, description = Text.fields.ibatch.filter
, coder = Filter.coder
}
)
(Json.field.required
{ fieldName = "start"
, toField = .start
, description = Text.fields.ibatch.start
, coder = coderITokenPTR
}
)
(Json.field.required
{ fieldName = "end"
, toField = .end
, description = Text.fields.ibatch.end
, coder = coderITokenPTR
}
)
{-| Define how to encode and decode a IBatchPTR to and from a JSON value.
-}
coderIBatchPTR : Json.Coder IBatchPTR
coderIBatchPTR =
Json.map
{ name = Text.docs.itoken.name
, description = Text.docs.itoken.description
, back = \(IBatchPTR value) -> value
, forth = IBatchPTR
}
coderIBatchPTRValue
{-| Define how to encode and decode a IBatchPTRValue to and from a JSON value.
-}
coderIBatchPTRValue : Json.Coder IBatchPTRValue
coderIBatchPTRValue =
Json.int
{-| Define how to encode and decode a IToken to and from a JSON value.
-}
coderIToken : Json.Coder IToken
coderIToken =
Json.object5
{ name = Text.docs.itoken.name
, description = Text.docs.itoken.description
, init = IToken
}
(Json.field.required
{ fieldName = "name"
, toField = .name
, description = Text.fields.itoken.name
, coder = coderTokenValue
}
)
(Json.field.optional.withDefault
{ fieldName = "starts"
, toField = .starts
, description = Text.fields.itoken.starts
, coder = Json.set coderIBatchPTRValue
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
(Json.field.optional.withDefault
{ fieldName = "ends"
, toField = .ends
, description = Text.fields.itoken.ends
, coder = Json.set coderIBatchPTRValue
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
(Json.field.optional.withDefault
{ fieldName = "inFrontOf"
, toField = .inFrontOf
, description = Text.fields.itoken.inFrontOf
, coder = Json.set coderITokenPTRValue
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
(Json.field.optional.withDefault
{ fieldName = "behind"
, toField = .behind
, description = Text.fields.itoken.behind
, coder = Json.set coderITokenPTRValue
, default = ( Set.empty, [] )
, defaultToString = always "[]"
}
)
{-| Define how to encode and decode a ITokenPTR to and from a JSON value.
-}
coderITokenPTR : Json.Coder ITokenPTR
coderITokenPTR =
Json.maybe coderITokenPTRValue
|> Json.map
{ name = Text.mappings.itokenPTR.name
, description = Text.mappings.itokenPTR.description
, back =
\itokenptr ->
case itokenptr of
ITokenPTR name ->
Just name
StartOfTimeline ->
Nothing
, forth =
\value ->
case value of
Just name ->
ITokenPTR name
Nothing ->
StartOfTimeline
}
{-| Define how to encode and decode a ITokenPTRValue to and from a JSON value.
-}
coderITokenPTRValue : Json.Coder ITokenPTRValue
coderITokenPTRValue =
Json.string
{-| Define how to encode and decode a TokenValue to and from a JSON value.
-}
coderTokenValue : Json.Coder TokenValue
coderTokenValue =
Json.string
{-| Append a token at the end of a batch.
-}
connectIBatchToIToken : IBatchPTR -> ITokenPTR -> Timeline -> Timeline
connectIBatchToIToken (IBatchPTR bptr) pointer (Timeline tl) =
case pointer of
StartOfTimeline ->
Timeline tl
ITokenPTR tptr ->
Timeline
{ tl
| batches =
Iddict.map bptr
(\batch -> { batch | end = pointer })
tl.batches
, tokens =
Hashdict.map tptr
(\token -> { token | ends = Set.insert bptr token.ends })
tl.tokens
}
{-| Append a token at the start of a batch.
-}
connectITokenToIBatch : ITokenPTR -> IBatchPTR -> Timeline -> Timeline
connectITokenToIBatch pointer (IBatchPTR bptr) (Timeline tl) =
case pointer of
StartOfTimeline ->
Timeline tl
ITokenPTR tptr ->
Timeline
{ tl
| tokens =
Hashdict.map tptr
(\token -> { token | starts = Set.insert bptr token.starts })
tl.tokens
, batches =
Iddict.map bptr
(\batch -> { batch | start = pointer })
tl.batches
}
{-| Connect two tokens to each other, revealing their relative location.
-}
connectITokenToIToken : ITokenPTR -> ITokenPTR -> Timeline -> Timeline
connectITokenToIToken pointer1 pointer2 (Timeline tl) =
case ( pointer1, pointer2 ) of
( ITokenPTR early, ITokenPTR late ) ->
if early == late then
Timeline tl
else
Timeline
{ tl
| tokens =
tl.tokens
|> Hashdict.map early
(\data ->
{ data | behind = Set.insert late data.behind }
)
|> Hashdict.map late
(\data ->
{ data | inFrontOf = Set.insert early data.inFrontOf }
)
}
( _, _ ) ->
Timeline tl
{-| Timeline JSON decoder that helps decode a Timeline from JSON.
-}
decoder : Json.Decoder Timeline
decoder =
Json.decode coder
{-| Create a new empty timeline.
-}
empty : Timeline
empty =
Timeline
{ batches = Iddict.empty
, events = Dict.empty
, filledBatches = 0
, mostRecentBatch = StartOfTimeline
, tokens = Hashdict.empty .name
}
{-| Directly encode a Timeline into a JSON value.
-}
encode : Json.Encoder Timeline
encode =
Json.encode coder
{-| Get an IBatch from the Timeline.
-}
getIBatch : IBatchPTR -> Timeline -> Maybe IBatch
getIBatch (IBatchPTR ptr) (Timeline { batches }) =
Iddict.get ptr batches
{-| Get an IToken from the Timeline.
-}
getITokenFromPTR : ITokenPTR -> Timeline -> Maybe IToken
getITokenFromPTR pointer (Timeline { tokens }) =
case pointer of
ITokenPTR ptr ->
Hashdict.get ptr tokens
StartOfTimeline ->
Nothing
{-| Insert a batch anywhere in the timeline; unlike `addSync`, the batch is not
marked as the most recent one.
-}
insert : Batch -> Timeline -> Timeline
insert batch timeline =
timeline
|> insertBatch batch
|> Tuple.first
{-| Insert a batch into the timeline.
-}
insertBatch : Batch -> Timeline -> ( Timeline, { start : ITokenPTR, end : ITokenPTR } )
insertBatch batch timeline =
case batch.start of
Just start ->
timeline
|> invokeIToken start
|> Tuple.mapSecond (invokeIToken batch.end)
|> (\( startPTR, ( endPTR, newTimeline ) ) ->
( insertIBatch
{ events = batch.events
, filter = batch.filter
, start = startPTR
, end = endPTR
}
newTimeline
, { start = startPTR, end = endPTR }
)
)
Nothing ->
timeline
|> invokeIToken batch.end
|> (\( endPTR, newTimeline ) ->
( insertIBatch
{ events = batch.events
, filter = batch.filter
, start = StartOfTimeline
, end = endPTR
}
newTimeline
, { start = StartOfTimeline, end = endPTR }
)
)
{-| Insert an internal batch into the timeline, and determine its result.
-}
insertIBatch : IBatch -> Timeline -> Timeline
insertIBatch ibatch (Timeline tl) =
case Iddict.insert ibatch tl.batches of
( batchPTR, newBatches ) ->
{ tl
| batches = newBatches
, events =
List.foldl
(\event dict ->
Dict.update event
(\value ->
case value of
Nothing ->
Just ( IBatchPTR batchPTR, [] )
Just ( head, tail ) ->
Just ( IBatchPTR batchPTR, head :: tail )
)
dict
)
tl.events
ibatch.events
, filledBatches =
if List.isEmpty ibatch.events then
tl.filledBatches
else
tl.filledBatches + 1
}
|> Timeline
|> connectITokenToIBatch ibatch.start (IBatchPTR batchPTR)
|> connectIBatchToIToken (IBatchPTR batchPTR) ibatch.end
{-| Invoke an itoken to guarantee that it exists.
-}
invokeIToken : TokenValue -> Timeline -> ( ITokenPTR, Timeline )
invokeIToken value (Timeline tl) =
( ITokenPTR value
, Timeline
{ tl
| tokens =
case Hashdict.get value tl.tokens of
Just _ ->
tl.tokens
Nothing ->
Hashdict.insert
{ name = value
, starts = Set.empty
, ends = Set.empty
, inFrontOf = Set.empty
, behind = Set.empty
}
tl.tokens
}
)
{-| Under a given filter, find the most recent events.
-}
mostRecentEvents : Filter -> Timeline -> List (List String)
mostRecentEvents filter (Timeline timeline) =
mostRecentFrom filter (Timeline timeline) timeline.mostRecentBatch
{-| Instead of finding the most recent events from the latest sync, users can
also find the most recent events given a token value.
-}
mostRecentEventsFrom : Filter -> ITokenPTRValue -> Timeline -> List (List String)
mostRecentEventsFrom filter tokenName timeline =
mostRecentFrom filter timeline (ITokenPTR tokenName)
{-| Under a given filter, starting from a given ITokenPTR, find the most recent
events.
-}
mostRecentFrom : Filter -> Timeline -> ITokenPTR -> List (List String)
mostRecentFrom filter timeline ptr =
Recursion.runRecursion
(\p ->
case getITokenFromPTR p.ptr timeline of
Nothing ->
Recursion.base []
Just token ->
if Set.member token.name p.visited then
Recursion.base []
else
token.ends
|> Set.toList
|> List.filterMap (\bptrv -> getIBatch (IBatchPTR bptrv) timeline)
|> List.filter (\ibatch -> Filter.subsetOf ibatch.filter filter)
|> Recursion.Traverse.traverseList
(\ibatch ->
Recursion.recurseThen
{ ptr = ibatch.start, visited = Set.insert token.name p.visited }
(\optionalTimelines ->
case optionalTimelines of
[] ->
List.singleton ibatch.events
|> Recursion.base
_ :: _ ->
optionalTimelines
|> List.map
(\outTimeline ->
List.append outTimeline ibatch.events
)
|> Recursion.base
)
)
|> Recursion.map List.concat
)
{ ptr = ptr, visited = Set.empty }
{-| Recount the Timeline's number of filled batches. Since the Timeline
automatically tracks this count itself, this function is generally only needed
in specific scenarios like decoding JSON values.
-}
recountFilledBatches : Timeline -> Timeline
recountFilledBatches (Timeline tl) =
Timeline
{ tl
| filledBatches =
tl.batches
|> Iddict.values
|> List.filter (\v -> v.events /= [])
|> List.length
}
{-| Create a timeline with a single batch inserted. This batch is considered the
most recent batch, as if created by a sync.
-}
singleton : Batch -> Timeline
singleton b =
insert b empty
@@ -0,0 +1,106 @@
module Internal.Values.User exposing
( User, toString, fromString
, localpart, domain
, coder
)
{-| A Matrix user is uniquely identified by their user ID. This User type helps
identify and safely handle these strings, transforming them into meaningful
data types.
## User
@docs User, toString, fromString
## Divide
Matrix users are identified by their unique ID. In the Matrix API, this is a
string that looks as follows:
    @alice:example.org
     \---/ \---------/
       |        |
       |        |
    localpart  domain
Since the username is safely parsed, one can get these parts of the username.
@docs localpart, domain
## JSON
@docs coder
-}
import Internal.Config.Log as Log exposing (log)
import Internal.Grammar.ServerName as ServerName
import Internal.Grammar.UserId as UserId
import Internal.Tools.Json as Json
import Parser as P
{-| The Matrix user represents a user across multiple Matrix rooms.
-}
type alias User =
UserId.UserID
{-| Define a method to encode/decode Matrix users.
-}
coder : Json.Coder User
coder =
Json.parser
{ name = "Username"
, p =
P.andThen
(\name ->
P.succeed
( name
, if UserId.isHistorical name then
[ log.warn "Historical user found"
]
else
[]
)
)
UserId.userIdParser
, toString = UserId.toString
}
{-| The domain represents the Matrix homeserver controlling this user. It also
offers other Matrix homeservers an indication of where to look if you wish to
send a message to this user.
-}
domain : User -> String
domain =
.domain >> ServerName.toString
{-| Parse a string and convert it into a User, if formatted properly.
-}
fromString : String -> Maybe User
fromString =
UserId.fromString
{-| The localpart is similar to a username, in the sense that every user has
their own localpart. The localpart is not unique across multiple servers,
however! There can be a user @alice:example.com and a user @alice:example.org in
a room at the same time.
-}
localpart : User -> String
localpart =
.localpart
{-| Convert a user into its unique identifier string value.
-}
toString : User -> String
toString =
UserId.toString
@@ -122,9 +122,10 @@ roomId (Event event) =
{-| Determine the fully-qualified ID of the user who sent an event.
-}
sender : Event -> String
sender : Event -> Types.User
sender (Event event) =
Envelope.extract .sender event
Envelope.map .sender event
|> Types.User
{-| Determine an event's state key.
src/Matrix/User.elm Normal file
@@ -0,0 +1,147 @@
module Matrix.User exposing
( User, toString
, localpart, domain
, get
)
{-| Matrix users are identified by their unique ID. In the Matrix API, this is a
string that looks as follows:
    @alice:example.org
     \---/ \---------/
       |        |
       |        |
    localpart  domain
Since it is very easy to abuse Matrix user IDs to sneak in arbitrary values,
the Elm SDK parses them and makes sure they are safe. As a result, you might
need this module to get the right information from a user!
## User
@docs User, toString
## Info
Sometimes, you are more interested in the username itself. These functions can
help you decipher, disambiguate and categorize users based on their username.
@docs localpart, domain
## Manipulate
@docs get
-}
import Internal.Values.Envelope as Envelope
import Internal.Values.User as Internal
import Types exposing (User(..))
{-| The User type represents a Matrix user.
It contains information like:
- Their username on Matrix
- The server that hosts their account
- Access tokens needed to talk to the server
It does **NOT** contain information like:
- Their nickname
- Their profile picture
- Your private room with them
You can get all that information by looking it up in the [Vault](Matrix#Vault).
**Note:** Please do not store this user type as a variable in your model! You
should always maintain a single source of truth in Elm, and the User type
contains various credentials and API tokens that might expire if you don't
update them from the Vault.
If you need to remember specific users, it is best to store their identifying
string via [toString](Matrix-User#toString) and later use
[get](Matrix-User#get) with the Vault to obtain the user type again.
-}
type alias User =
Types.User
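Following the note above, a sketch of how one might remember a user without storing the `User` type in the model (the `Model` record and field names are hypothetical):

```elm
-- Store only the identifying string in your model...
type alias Model =
    { friend : String }

-- ...and resolve it against the Vault whenever you need the full type.
friendUser : Types.Vault -> Model -> Maybe User
friendUser vault model =
    get vault model.friend
```

This keeps the Vault as the single source of truth for credentials and tokens.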
{-| The domain is the name of the server that the user connects to. Server names
are case-sensitive, so if the strings are equal, the users are on the same
server!
As a result, you can use the user domain for:
- Disambiguating users in a room who share a localpart on different servers
- Spotting users from a potentially malicious homeserver
- Counting the homeservers represented in a room
See the following examples:
domain (get vault "@alice:example.org") -- "example.org"
domain (get vault "@bob:127.0.0.1") -- "127.0.0.1"
domain (get vault "@charlie:[2001:db8::]") -- "[2001:db8::]"
-}
domain : User -> String
domain (User user) =
Envelope.extract Internal.domain user
{-| Get a specific user by their unique identifier.
The Vault is needed as an input because the `User` type also stores various
credentials needed to talk to the Matrix API.
get vault "@alice:example.org" -- Just (User "alice" "example.org")
get vault "@bob:127.0.0.1" -- Just (User "bob" "127.0.0.1")
get vault "@charlie:[2001:db8::]" -- Just (User "charlie" "2001:db8::")
get vault "@evil:#mp#ss#bl#.c#m" -- Nothing
get vault "" -- Nothing
-}
get : Types.Vault -> String -> Maybe User
get (Types.Vault vault) username =
Envelope.mapMaybe (\_ -> Internal.fromString username) vault
|> Maybe.map Types.User
{-| The localpart is the user's unique username. Every homeserver has its own
username registry, so you might occasionally find distinct users with the same
localpart.
The localpart is often used as a user's name in a room if they haven't set up
a custom name.
See the following examples:
localpart (get vault "@alice:example.org") -- "alice"
localpart (get vault "@bob:127.0.0.1") -- "bob"
localpart (get vault "@charlie:[2001:db8::]") -- "charlie"
-}
localpart : User -> String
localpart (User user) =
Envelope.extract Internal.localpart user
{-| Get the uniquely identifying string for this user. Since the strings are
case-sensitive, you can run a simple string comparison to compare usernames.
-}
toString : User -> String
toString (User user) =
Envelope.extract Internal.toString user
@@ -1,4 +1,4 @@
module Types exposing (Vault(..), Event(..))
module Types exposing (Vault(..), Event(..), User(..))
{-| The Elm SDK uses a lot of records and values that are easy to manipulate.
Yet, the [Elm design guidelines](https://package.elm-lang.org/help/design-guidelines#keep-tags-and-record-constructors-secret)
@@ -12,12 +12,13 @@ access their content directly.
The opaque types are placed in a central module so all exposed modules can
safely access all exposed data types without risking circular imports.
@docs Vault, Event
@docs Vault, Event, User
-}
import Internal.Values.Envelope as Envelope
import Internal.Values.Event as Event
import Internal.Values.User as User
import Internal.Values.Vault as Vault
@@ -27,6 +28,12 @@ type Event
= Event (Envelope.Envelope Event.Event)
{-| Opaque type for Matrix User
-}
type User
= User (Envelope.Envelope User.User)
{-| Opaque type for Matrix Vault
-}
type Vault
@@ -0,0 +1,435 @@
module Test.Filter.Timeline exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Filter.Timeline as Filter exposing (Filter)
import Internal.Grammar.UserId as U
import Internal.Values.Event as Event
import Json.Decode as D
import Json.Encode as E
import Set
import Test exposing (..)
import Test.Values.Event as TestEvent
fuzzer : Fuzzer Filter
fuzzer =
Fuzz.map2 Filter.and
(Fuzz.oneOf
[ Fuzz.map Filter.allSendersExcept (Fuzz.list Fuzz.string)
, Fuzz.map Filter.onlySenders (Fuzz.list Fuzz.string)
, Fuzz.constant Filter.pass
]
)
(Fuzz.oneOf
[ Fuzz.map Filter.allTypesExcept (Fuzz.list Fuzz.string)
, Fuzz.map Filter.onlyTypes (Fuzz.list Fuzz.string)
, Fuzz.constant Filter.pass
]
)
suite : Test
suite =
describe "Timeline filter"
[ describe "Tautological equivalences"
[ test "Pass /= fail"
(Filter.pass
|> Expect.notEqual Filter.fail
|> always
)
, test "All senders == pass"
(Filter.allSendersExcept []
|> Expect.equal Filter.pass
|> always
)
, test "All types == pass"
(Filter.allTypesExcept []
|> Expect.equal Filter.pass
|> always
)
, test "No senders == fail"
(Filter.onlySenders []
|> Expect.equal Filter.fail
|> always
)
, test "No types == fail"
(Filter.onlyTypes []
|> Expect.equal Filter.fail
|> always
)
, fuzz2 Fuzz.string
(Fuzz.list Fuzz.string)
"Some types /= some senders"
(\head tail ->
Expect.notEqual
(Filter.onlyTypes (head :: tail))
(Filter.onlySenders (head :: tail))
)
, fuzz2 fuzzer
fuzzer
"Filter.and f1 f2 == pass iff f1 == f2 == pass"
(\filter1 filter2 ->
Expect.equal
(Filter.and filter1 filter2 == Filter.pass)
(filter1 == Filter.pass && filter2 == Filter.pass)
)
]
, describe "Event filters"
[ fuzz TestEvent.fuzzer
"Only event type filter matches"
(\event ->
event
|> Filter.match (Filter.onlyTypes [ event.eventType ])
|> Expect.equal True
)
, fuzz TestEvent.fuzzer
"Only event sender filter matches"
(\event ->
event
|> Filter.match (Filter.onlySenders [ U.toString event.sender ])
|> Expect.equal True
)
, fuzz TestEvent.fuzzer
"Not event type filter doesn't match"
(\event ->
event
|> Filter.match (Filter.allTypesExcept [ event.eventType ])
|> Expect.equal False
)
, fuzz TestEvent.fuzzer
"Not event sender filter doesn't match"
(\event ->
event
|> Filter.match (Filter.allSendersExcept [ U.toString event.sender ])
|> Expect.equal False
)
, fuzz2 TestEvent.fuzzer
(Fuzz.list Fuzz.string)
"Only matches when in sender list"
(\event senders ->
event
|> Filter.match (Filter.onlySenders senders)
|> Expect.equal (List.member (U.toString event.sender) senders)
)
, fuzz2 TestEvent.fuzzer
(Fuzz.list Fuzz.string)
"Only matches when in type list"
(\event types ->
event
|> Filter.match (Filter.onlyTypes types)
|> Expect.equal (List.member event.eventType types)
)
, fuzz2 TestEvent.fuzzer
(Fuzz.list Fuzz.string)
"All except doesn't match when in sender list"
(\event senders ->
event
|> Filter.match (Filter.allSendersExcept senders)
|> Expect.notEqual (List.member (U.toString event.sender) senders)
)
, fuzz2 TestEvent.fuzzer
(Fuzz.list Fuzz.string)
"All except doesn't match when in type list"
(\event types ->
event
|> Filter.match (Filter.allTypesExcept types)
|> Expect.notEqual (List.member event.eventType types)
)
, fuzz (Fuzz.list Fuzz.string)
"Only list AND all except list = fail senders"
(\senders ->
Filter.onlySenders senders
|> Filter.and (Filter.allSendersExcept senders)
|> Expect.equal Filter.fail
)
, fuzz (Fuzz.list Fuzz.string)
"Only list AND all except list = fail types"
(\types ->
Filter.onlyTypes types
|> Filter.and (Filter.allTypesExcept types)
|> Expect.equal Filter.fail
)
, fuzz2 (Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"Only list + all except list = common types"
(\t1 t2 ->
Expect.equal
(Filter.and
(Filter.onlyTypes t1)
(Filter.allTypesExcept t2)
)
(Set.diff (Set.fromList t1) (Set.fromList t2)
|> Set.toList
|> Filter.onlyTypes
)
)
, fuzz2 (Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"Only list + all except list = common senders"
(\t1 t2 ->
Expect.equal
(Filter.and
(Filter.onlySenders t1)
(Filter.allSendersExcept t2)
)
(Set.diff (Set.fromList t1) (Set.fromList t2)
|> Set.toList
|> Filter.onlySenders
)
)
]
, describe "Subset testing"
[ fuzz2 fuzzer
fuzzer
"Combining two filters is always a subset"
(\filter1 filter2 ->
filter1
|> Filter.and filter2
|> Expect.all
[ Filter.subsetOf filter1 >> Expect.equal True
, Filter.subsetOf filter2 >> Expect.equal True
]
)
, fuzz
(Fuzz.bool
|> Fuzz.andThen
(\same ->
if same then
Fuzz.map (\a -> ( a, a )) fuzzer
else
Fuzz.map2 Tuple.pair fuzzer fuzzer
)
)
"subset goes both ways iff equal"
(\( filter1, filter2 ) ->
Expect.equal
(filter1 == filter2)
(Filter.subsetOf filter1 filter2
&& Filter.subsetOf filter2 filter1
)
)
, fuzz2 Fuzz.string
(Fuzz.list Fuzz.string)
"One more excluded sender is a subset"
(\head tail ->
Filter.allSendersExcept (head :: tail)
|> Filter.subsetOf (Filter.allSendersExcept tail)
|> Expect.equal True
)
, fuzz2 Fuzz.string
(Fuzz.list Fuzz.string)
"One more excluded type is a subset"
(\head tail ->
Filter.allTypesExcept (head :: tail)
|> Filter.subsetOf (Filter.allTypesExcept tail)
|> Expect.equal True
)
, fuzz2 Fuzz.string
(Fuzz.list Fuzz.string)
"One less included sender is a subset"
(\head tail ->
Filter.onlySenders tail
|> Filter.subsetOf (Filter.onlySenders (head :: tail))
|> Expect.equal True
)
, fuzz2 Fuzz.string
(Fuzz.list Fuzz.string)
"One less included type is a subset"
(\head tail ->
Filter.onlyTypes tail
|> Filter.subsetOf (Filter.onlyTypes (head :: tail))
|> Expect.equal True
)
, fuzz3 Fuzz.string
(Fuzz.list Fuzz.string)
fuzzer
"One more excluded sender is a subset - even when combined with another filter"
(\head tail filter ->
Filter.allSendersExcept (head :: tail)
|> Filter.and filter
|> Filter.subsetOf (Filter.and filter <| Filter.allSendersExcept tail)
|> Expect.equal True
)
, fuzz3 Fuzz.string
(Fuzz.list Fuzz.string)
fuzzer
"One more excluded type is a subset - even when combined with another filter"
(\head tail filter ->
Filter.allTypesExcept (head :: tail)
|> Filter.and filter
|> Filter.subsetOf (Filter.and filter <| Filter.allTypesExcept tail)
|> Expect.equal True
)
, fuzz3 Fuzz.string
(Fuzz.list Fuzz.string)
fuzzer
"One less included sender is a subset - even when combined with another filter"
(\head tail filter ->
Filter.onlySenders tail
|> Filter.and filter
|> Filter.subsetOf (Filter.and filter <| Filter.onlySenders (head :: tail))
|> Expect.equal True
)
, fuzz3 Fuzz.string
(Fuzz.list Fuzz.string)
fuzzer
"One less included type is a subset - even when combined with another filter"
(\head tail filter ->
Filter.onlyTypes tail
|> Filter.and filter
|> Filter.subsetOf (Filter.and filter <| Filter.onlyTypes (head :: tail))
|> Expect.equal True
)
]
, describe "Use case testing"
[ fuzz3 (Fuzz.list TestEvent.fuzzer)
(Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"Only senders + only type"
(\events senders types ->
let
l1 : List Event.Event
l1 =
events
|> Filter.run
(Filter.and
(Filter.onlySenders senders)
(Filter.onlyTypes types)
)
l2 : List Event.Event
l2 =
List.filter
(\e ->
List.member (U.toString e.sender) senders
&& List.member e.eventType types
)
events
in
Expect.all
[ Expect.equal (List.length l1) (List.length l2)
|> always
, List.map2 Event.isEqual l1 l2
|> List.all identity
|> Expect.equal True
|> always
]
()
)
, fuzz3 (Fuzz.list TestEvent.fuzzer)
(Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"Only senders + all except type"
(\events senders types ->
let
l1 : List Event.Event
l1 =
events
|> Filter.run
(Filter.and
(Filter.onlySenders senders)
(Filter.allTypesExcept types)
)
l2 : List Event.Event
l2 =
List.filter
(\e ->
List.member (U.toString e.sender) senders
&& (not <| List.member e.eventType types)
)
events
in
Expect.all
[ Expect.equal (List.length l1) (List.length l2)
|> always
, List.map2 Event.isEqual l1 l2
|> List.all identity
|> Expect.equal True
|> always
]
()
)
, fuzz3 (Fuzz.list TestEvent.fuzzer)
(Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"All except senders + only type"
(\events senders types ->
let
l1 : List Event.Event
l1 =
events
|> Filter.run
(Filter.and
(Filter.allSendersExcept senders)
(Filter.onlyTypes types)
)
l2 : List Event.Event
l2 =
List.filter
(\e ->
(not <| List.member (U.toString e.sender) senders)
&& List.member e.eventType types
)
events
in
Expect.all
[ Expect.equal (List.length l1) (List.length l2)
|> always
, List.map2 Event.isEqual l1 l2
|> List.all identity
|> Expect.equal True
|> always
]
()
)
, fuzz3 (Fuzz.list TestEvent.fuzzer)
(Fuzz.list Fuzz.string)
(Fuzz.list Fuzz.string)
"All except senders + all except type"
(\events senders types ->
let
l1 : List Event.Event
l1 =
events
|> Filter.run
(Filter.and
(Filter.allSendersExcept senders)
(Filter.allTypesExcept types)
)
l2 : List Event.Event
l2 =
List.filter
(\e ->
(not <| List.member (U.toString e.sender) senders)
&& (not <| List.member e.eventType types)
)
events
in
Expect.all
[ Expect.equal (List.length l1) (List.length l2)
|> always
, List.map2 Event.isEqual l1 l2
|> List.all identity
|> Expect.equal True
|> always
]
()
)
]
, describe "JSON"
[ fuzz fuzzer
"encode -> decode is the same"
(\filter ->
filter
|> Filter.encode
|> E.encode 0
|> D.decodeString Filter.decoder
|> Expect.equal (Ok ( filter, [] ))
)
]
]
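
The tautologies above (`onlySenders [] == Filter.fail`, `Filter.and f1 f2 == Filter.pass` iff both halves are `pass`, and the only/except intersection laws) fall out naturally of a set-based representation. Below is a minimal sketch of one possible model for the sender half, purely for illustration — the module name and constructors are hypothetical, not the SDK's actual internals:

```elm
module FilterSketch exposing (Filter, and, fail, match, onlySenders, pass)

{-| Hypothetical set-based model of the sender half of a timeline filter.
NOT the SDK's actual representation; it only illustrates the laws tested above.
-}

import Set exposing (Set)


type Filter
    = Pass -- no restriction at all
    | Only (Set String) -- whitelist; `Only Set.empty` rejects everything


pass : Filter
pass =
    Pass


fail : Filter
fail =
    Only Set.empty


onlySenders : List String -> Filter
onlySenders =
    Set.fromList >> Only


{-| Combining two filters intersects their whitelists, so
`and f1 f2 == pass` exactly when both halves are `pass`.
-}
and : Filter -> Filter -> Filter
and f1 f2 =
    case ( f1, f2 ) of
        ( Pass, other ) ->
            other

        ( other, Pass ) ->
            other

        ( Only a, Only b ) ->
            Only (Set.intersect a b)


match : Filter -> String -> Bool
match f sender =
    case f of
        Pass ->
            True

        Only senders ->
            Set.member sender senders
```

In this model `onlySenders []` is literally the empty whitelist, which is why the suite can assert it equals `fail`, and `subsetOf` reduces to `Set.isEmpty (Set.diff a b)`.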

View File

@@ -0,0 +1,126 @@
module Test.Grammar.ServerName exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Grammar.ServerName as SN
import Test exposing (..)
dnsFuzzer : Fuzzer String
dnsFuzzer =
Fuzz.map2
(\head tail ->
String.fromList (head :: tail)
)
("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
|> String.toList
|> Fuzz.oneOfValues
)
("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-."
|> String.toList
|> Fuzz.oneOfValues
|> Fuzz.listOfLengthBetween 0 (255 - 1)
)
hostnameFuzzer : Fuzzer String
hostnameFuzzer =
Fuzz.oneOf
[ dnsFuzzer
, ipv4Fuzzer
, Fuzz.map (\x -> "[" ++ x ++ "]") ipv6Fuzzer
]
ipv4Fuzzer : Fuzzer String
ipv4Fuzzer =
Fuzz.intRange 0 255
|> Fuzz.listOfLength 4
|> Fuzz.map
(List.map String.fromInt
>> List.intersperse "."
>> String.concat
)
ipv6Fuzzer : Fuzzer String
ipv6Fuzzer =
let
num : Fuzzer String
num =
"0123456789abcdefABCDEF"
|> String.toList
|> Fuzz.oneOfValues
|> Fuzz.listOfLength 4
|> Fuzz.map String.fromList
in
Fuzz.oneOf
[ Fuzz.listOfLength 8 num
|> Fuzz.map (List.intersperse ":")
|> Fuzz.map String.concat
, Fuzz.listOfLengthBetween 0 7 num
|> Fuzz.andThen
(\front ->
num
|> Fuzz.listOfLengthBetween 0 (8 - 1 - List.length front)
|> Fuzz.map
(\back ->
[ front
|> List.intersperse ":"
, [ "::" ]
, back
|> List.intersperse ":"
]
|> List.concat
|> String.concat
)
)
]
portFuzzer : Fuzzer String
portFuzzer =
Fuzz.oneOf
[ Fuzz.constant ""
, Fuzz.intRange 0 65535
|> Fuzz.map (\p -> ":" ++ String.fromInt p)
]
serverNameFuzzer : Fuzzer String
serverNameFuzzer =
Fuzz.map2 (++) hostnameFuzzer portFuzzer
suite : Test
suite =
describe "Server name tests"
[ describe "Checking correct values"
[ fuzz serverNameFuzzer
"Correct server names validate"
(\server ->
SN.fromString server
|> Maybe.map SN.toString
|> Expect.equal (Just server)
)
, test "Checking spec examples"
(\() ->
let
examples : List String
examples =
[ "matrix.org"
, "matrix.org:8888"
, "1.2.3.4"
, "1.2.3.4:1234"
, "[1234:5678::abcd]"
, "[1234:5678::abcd]:5678"
]
in
examples
|> List.map SN.fromString
|> List.map ((/=) Nothing)
|> Expect.equalLists
(List.repeat (List.length examples) True)
)
]
]

View File

@@ -0,0 +1,159 @@
module Test.Grammar.UserId exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Grammar.ServerName as SN
import Internal.Grammar.UserId as U
import Test exposing (..)
import Test.Grammar.ServerName as ServerName
modernUserCharFuzzer : Fuzzer Char
modernUserCharFuzzer =
Fuzz.oneOf
[ Fuzz.intRange 0x61 0x7A
|> Fuzz.map Char.fromCode
-- modern localparts only allow a-z, 0-9 and -.=_/+ (uppercase is historical-only)
, "abcdefghijklmnopqrstuvwxyz0123456789-.=_/+"
|> String.toList
|> Fuzz.oneOfValues
]
historicalUserCharFuzzer : Fuzzer Char
historicalUserCharFuzzer =
[ ( 0x21, 0x39 ), ( 0x3B, 0x7E ) ]
|> List.map (\( low, high ) -> Fuzz.intRange low high)
|> Fuzz.oneOf
|> Fuzz.map Char.fromCode
modernUserFuzzer : Fuzzer String
modernUserFuzzer =
Fuzz.map2
(\localpart domain ->
let
maxLocalSize : Int
maxLocalSize =
255 - String.length domain - 2
in
localpart
|> List.take maxLocalSize
|> String.fromList
|> (\l -> "@" ++ l ++ ":" ++ domain)
)
(Fuzz.listOfLengthBetween 1 255 modernUserCharFuzzer)
(ServerName.serverNameFuzzer
|> Fuzz.filter
(\name ->
String.length name < 255 - 2
)
)
historicalUserFuzzer : Fuzzer String
historicalUserFuzzer =
Fuzz.map2
(\localpart domain ->
let
maxLocalSize : Int
maxLocalSize =
255 - String.length domain - 2
in
localpart
|> List.take maxLocalSize
|> String.fromList
|> (\l -> "@" ++ l ++ ":" ++ domain)
)
(Fuzz.listOfLengthBetween 1 255 historicalUserCharFuzzer)
(ServerName.serverNameFuzzer
|> Fuzz.filter
(\name ->
String.length name < 255 - 2
)
)
userFuzzer : Fuzzer String
userFuzzer =
Fuzz.oneOf [ modernUserFuzzer, historicalUserFuzzer ]
fullUserFuzzer : Fuzzer U.UserID
fullUserFuzzer =
userFuzzer
|> Fuzz.map U.fromString
|> Fuzz.map (Maybe.withDefault { localpart = "a", domain = { host = SN.DNS "a", port_ = Nothing } })
suite : Test
suite =
describe "UserId"
[ describe "Size"
[ fuzz ServerName.serverNameFuzzer
"Username cannot be length 0"
(\domain ->
"@"
++ ":"
++ domain
|> U.fromString
|> Expect.equal Nothing
)
, fuzz2 (Fuzz.listOfLengthBetween 1 255 historicalUserCharFuzzer)
ServerName.serverNameFuzzer
"Username length cannot exceed 255"
(\localpart domain ->
let
username : String
username =
"@"
++ String.fromList localpart
++ ":"
++ domain
in
Expect.equal
(U.fromString username == Nothing)
(String.length username > 255)
)
, fuzz modernUserFuzzer
"Modern fuzzer has appropriate size"
(String.length >> Expect.lessThan 256)
, fuzz historicalUserFuzzer
"Historical fuzzer has appropriate size"
(String.length >> Expect.lessThan 256)
, fuzz userFuzzer
"User fuzzers have appropriate size"
(String.length >> Expect.lessThan 256)
]
, describe "From string evaluation"
[ fuzz userFuzzer
"fromString always returns a value on fuzzed input"
(U.fromString >> Expect.notEqual Nothing)
, fuzz userFuzzer
"fromString -> toString returns the same value"
(\username ->
username
|> U.fromString
|> Maybe.map U.toString
|> Expect.equal (Just username)
)
-- Not always True
-- TODO: Define a fitting fuzzer for this test
-- , fuzz historicalUserFuzzer
-- "Historical users are historical"
-- (\username ->
-- username
-- |> U.fromString
-- |> Maybe.map U.isHistorical
-- |> Expect.equal (Just True)
-- )
, fuzz modernUserFuzzer
"Modern users are not historical"
(\username ->
username
|> U.fromString
|> Maybe.map U.isHistorical
|> Expect.equal (Just False)
)
]
]

View File

@@ -115,7 +115,7 @@ suite =
(\event ->
Hashdict.singleton .eventId event
|> Hashdict.remove event
|> Hashdict.isEqual (Hashdict.empty .sender)
|> Hashdict.isEqual (Hashdict.empty .roomId)
|> Expect.equal True
)
, fuzz TestEvent.fuzzer
@@ -123,7 +123,7 @@ suite =
(\event ->
Hashdict.singleton .eventId event
|> Hashdict.removeKey event.eventId
|> Hashdict.isEqual (Hashdict.empty .sender)
|> Hashdict.isEqual (Hashdict.empty .roomId)
|> Expect.equal True
)
, fuzz TestEvent.fuzzer
@@ -168,8 +168,8 @@ suite =
|> Json.encode (Hashdict.coder .eventId Event.coder)
|> E.encode indent
|> D.decodeString (Json.decode <| Hashdict.coder .eventId Event.coder)
|> Result.map (Tuple.mapFirst Hashdict.toList)
|> Expect.equal (Ok ( Hashdict.toList hashdict, [] ))
|> Result.map (Tuple.first >> Hashdict.toList)
|> Expect.equal (Ok (Hashdict.toList hashdict))
)
]
]

View File

@@ -3,6 +3,7 @@ module Test.Tools.Iddict exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Tools.Iddict as Iddict exposing (Iddict)
import Internal.Tools.Json as Json
import Json.Decode as D
import Json.Encode as E
import Test exposing (..)
@@ -73,21 +74,23 @@ empty =
)
, test "JSON encode -> decode -> empty"
(Iddict.empty
|> Iddict.encode identity
|> D.decodeValue (Iddict.decoder D.value)
|> Iddict.encode Json.value
|> D.decodeValue (Iddict.decoder Json.value)
|> Result.map Tuple.first
|> Expect.equal (Ok Iddict.empty)
|> always
)
, test "JSON encode"
(Iddict.empty
|> Iddict.encode identity
|> Iddict.encode Json.value
|> E.encode 0
|> Expect.equal "{\"cursor\":0,\"dict\":{}}"
|> Expect.equal "{\"dict\":{}}"
|> always
)
, test "JSON decode"
("{\"cursor\":0,\"dict\":{}}"
|> D.decodeString (Iddict.decoder D.value)
("{\"dict\":{}}"
|> D.decodeString (Iddict.decoder Json.value)
|> Result.map Tuple.first
|> Expect.equal (Ok Iddict.empty)
|> always
)
@@ -170,8 +173,9 @@ singleton =
"JSON encode -> decode -> singleton"
(\single ->
single
|> Iddict.encode E.int
|> D.decodeValue (Iddict.decoder D.int)
|> Iddict.encode Json.int
|> D.decodeValue (Iddict.decoder Json.int)
|> Result.map Tuple.first
|> Expect.equal (Ok single)
)
, fuzz Fuzz.int
@@ -179,7 +183,7 @@ singleton =
(\i ->
Iddict.singleton i
|> Tuple.second
|> Iddict.encode E.int
|> Iddict.encode Json.int
|> E.encode 0
|> Expect.equal ("{\"cursor\":1,\"dict\":{\"0\":" ++ String.fromInt i ++ "}}")
)
@@ -187,7 +191,8 @@ singleton =
"JSON decode"
(\i ->
("{\"cursor\":1,\"dict\":{\"0\":" ++ String.fromInt i ++ "}}")
|> D.decodeString (Iddict.decoder D.int)
|> D.decodeString (Iddict.decoder Json.int)
|> Result.map Tuple.first
|> Tuple.pair 0
|> Expect.equal (Iddict.singleton i |> Tuple.mapSecond Ok)
)

View File

@@ -198,8 +198,8 @@ suite =
|> Json.encode (Mashdict.coder .stateKey Event.coder)
|> E.encode indent
|> D.decodeString (Json.decode <| Mashdict.coder .stateKey Event.coder)
|> Result.map (Tuple.mapFirst Mashdict.toList)
|> Expect.equal (Ok ( Mashdict.toList hashdict, [] ))
|> Result.map (Tuple.first >> Mashdict.toList)
|> Expect.equal (Ok (Mashdict.toList hashdict))
)
]
]

View File

@@ -1,9 +1,11 @@
module Test.Values.Event exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Values.Event as Event exposing (Event)
import Json.Encode as E
import Test exposing (..)
import Test.Grammar.UserId as UserId
import Test.Tools.Timestamp as TestTimestamp
@@ -14,7 +16,7 @@ fuzzer =
Fuzz.string
TestTimestamp.fuzzer
Fuzz.string
Fuzz.string
UserId.fullUserFuzzer
(Fuzz.maybe Fuzz.string)
Fuzz.string
(Fuzz.maybe unsignedDataFuzzer)
@@ -65,3 +67,15 @@ valueFuzzer =
, Fuzz.map (E.list E.string) (Fuzz.list Fuzz.string)
, Fuzz.map Event.encode (Fuzz.lazy (\_ -> fuzzer))
]
suite : Test
suite =
describe "Sanity check"
[ fuzz fuzzer
"event = event"
(\event ->
Event.isEqual event event
|> Expect.equal True
)
]

View File

@@ -0,0 +1,383 @@
module Test.Values.Timeline exposing (..)
import Expect
import Fuzz exposing (Fuzzer)
import Internal.Filter.Timeline as Filter exposing (Filter)
import Internal.Tools.Json as Json
import Internal.Values.Timeline as Timeline exposing (Batch, Timeline)
import Json.Decode as D
import Test exposing (..)
import Test.Filter.Timeline as TestFilter
fuzzer : Fuzzer Timeline
fuzzer =
TestFilter.fuzzer
|> Fuzz.andThen
(\globalFilter ->
Fuzz.oneOf
[ Fuzz.map2
(\start batches ->
List.foldl
(\b ( s, f ) ->
( b.end
, f >> Timeline.insert { b | start = Just s, filter = globalFilter }
)
)
( start, identity )
batches
|> Tuple.second
)
Fuzz.string
(Fuzz.listOfLengthBetween 0 10 fuzzerBatch)
, Fuzz.map2
(\start batches ->
List.foldl
(\b ( s, f ) ->
( b.end
, f >> Timeline.insert { b | start = Just s, filter = Filter.and globalFilter b.filter }
)
)
( start, identity )
batches
|> Tuple.second
)
Fuzz.string
(Fuzz.listOfLengthBetween 0 4 fuzzerBatch)
, Fuzz.map2
(\start batches ->
List.foldl
(\b ( s, f ) ->
( b.end
, f >> Timeline.addSync { b | start = Just s, filter = globalFilter }
)
)
( start, identity )
batches
|> Tuple.second
)
Fuzz.string
(Fuzz.listOfLengthBetween 0 10 fuzzerBatch)
, Fuzz.map2
(\start batches ->
List.foldl
(\b ( s, f ) ->
( b.end
, f >> Timeline.addSync { b | start = Just s, filter = Filter.and globalFilter b.filter }
)
)
( start, identity )
batches
|> Tuple.second
)
Fuzz.string
(Fuzz.listOfLengthBetween 0 4 fuzzerBatch)
]
|> Fuzz.listOfLengthBetween 0 10
|> Fuzz.map (List.foldl (<|) Timeline.empty)
)
fuzzerBatch : Fuzzer Batch
fuzzerBatch =
Fuzz.map4 Batch
(Fuzz.list Fuzz.string)
TestFilter.fuzzer
(Fuzz.maybe Fuzz.string)
Fuzz.string
suite : Test
suite =
describe "Timeline"
[ describe "most recent events with filters"
[ fuzz TestFilter.fuzzer
"Events are returned properly"
(\filter ->
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = filter
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.mostRecentEventsFrom filter "token_3"
|> Expect.equal
[ [ "a", "b", "c", "d", "e", "f" ] ]
)
, fuzz2 TestFilter.fuzzer
TestFilter.fuzzer
"Sub-filters get the same results"
(\f1 f2 ->
let
subFilter =
Filter.and f1 f2
in
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = f1
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = f1
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.mostRecentEventsFrom subFilter "token_3"
|> Expect.equal
[ [ "a", "b", "c", "d", "e", "f" ] ]
)
, fuzz2 TestFilter.fuzzer
TestFilter.fuzzer
"Same result ONLY for a sub-filter"
(\f1 f2 ->
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = f1
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = f1
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.mostRecentEventsFrom f2 "token_3"
|> (\events ->
Expect.equal
(Filter.subsetOf f1 f2)
(events == [ [ "a", "b", "c", "d", "e", "f" ] ])
)
)
]
, describe "Forks in the road"
[ fuzz2 TestFilter.fuzzer
TestFilter.fuzzer
"Two options returned"
(\f1 f2 ->
let
subFilter =
Filter.and f1 f2
in
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = f1
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = f2
, start = Just "token_3"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "g", "h", "i" ]
, filter = subFilter
, start = Just "token_2"
, end = "token_4"
}
|> Timeline.mostRecentEventsFrom subFilter "token_4"
|> Expect.equal
[ [ "a", "b", "c", "g", "h", "i" ]
, [ "d", "e", "f", "g", "h", "i" ]
]
)
]
, describe "Gaps"
[ fuzz TestFilter.fuzzer
"Gaps leave behind old events"
(\filter ->
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = filter
, start = Just "token_3"
, end = "token_4"
}
|> Timeline.mostRecentEventsFrom filter "token_4"
|> Expect.equal [ [ "d", "e", "f" ] ]
)
, fuzz3 TestFilter.fuzzer
(Fuzz.list Fuzz.string)
(Fuzz.pair (Fuzz.list Fuzz.string) (Fuzz.list Fuzz.string))
"Gaps can be bridged"
(\filter l1 ( l2, l3 ) ->
Timeline.empty
|> Timeline.insert
{ events = l1
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = l3
, filter = filter
, start = Just "token_3"
, end = "token_4"
}
|> Timeline.insert
{ events = l2
, filter = filter
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.mostRecentEventsFrom filter "token_4"
|> Expect.equal [ List.concat [ l1, l2, l3 ] ]
)
]
, describe "JSON"
[ fuzz fuzzer
"Encode + Decode gives same output"
(\timeline ->
timeline
|> Json.encode Timeline.coder
|> D.decodeValue (Json.decode Timeline.coder)
|> Result.map Tuple.first
|> Result.map (Timeline.mostRecentEvents Filter.pass)
|> Expect.equal (Ok <| Timeline.mostRecentEvents Filter.pass timeline)
)
]
, describe "Weird loops"
[ fuzz TestFilter.fuzzer
"Weird loops stop looping"
(\filter ->
Timeline.empty
|> Timeline.insert
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.insert
{ events = [ "d", "e", "f" ]
, filter = filter
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.insert
{ events = [ "g", "h", "i" ]
, filter = filter
, start = Just "token_3"
, end = "token_2"
}
|> Timeline.mostRecentEventsFrom filter "token_2"
|> Expect.equal
[ [ "a", "b", "c" ]
, [ "d", "e", "f", "g", "h", "i" ]
]
)
]
, describe "Sync"
[ fuzz TestFilter.fuzzer
"Sync fills gaps"
(\filter ->
Timeline.empty
|> Timeline.addSync
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.addSync
{ events = [ "f", "g", "h" ]
, filter = filter
, start = Just "token_3"
, end = "token_4"
}
|> Timeline.insert
{ events = [ "d", "e" ]
, filter = filter
, start = Just "token_2"
, end = "token_3"
}
|> Timeline.mostRecentEvents filter
|> Expect.equal [ [ "a", "b", "c", "d", "e", "f", "g", "h" ] ]
)
, fuzz TestFilter.fuzzer
"Sync doesn't fill open gaps"
(\filter ->
Timeline.empty
|> Timeline.addSync
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just "token_1"
, end = "token_2"
}
|> Timeline.addSync
{ events = [ "f", "g", "h" ]
, filter = filter
, start = Just "token_3"
, end = "token_4"
}
|> Timeline.mostRecentEvents filter
|> Expect.equal [ [ "f", "g", "h" ] ]
)
, fuzz3 (Fuzz.pair Fuzz.string Fuzz.string)
fuzzer
TestFilter.fuzzer
"Getting /sync is the same as getting from the token"
(\( start, end ) timeline filter ->
let
t : Timeline
t =
Timeline.addSync
{ events = [ "a", "b", "c" ]
, filter = filter
, start = Just start
, end = end
}
timeline
in
Expect.equal
(Timeline.mostRecentEvents filter t)
(Timeline.mostRecentEventsFrom filter end t)
)
]
]
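
The suite above treats a timeline as a set of batches connected by start/end tokens: `mostRecentEventsFrom` walks backwards from a token, and every distinct path through the token graph yields its own event list (which is why forks and loops produce multiple lists). Below is a minimal sketch of that backwards traversal under those assumptions — the types and function names are hypothetical, not the SDK's implementation, and it ignores filters and gap semantics entirely:

```elm
module TimelineSketch exposing (Batch, mostRecentEventsFrom)

{-| Hypothetical model: each batch spans from a start token to an end token.
Walking backwards from a token yields one event list per distinct path.
NOT the SDK's implementation; filters and sync gaps are deliberately omitted.
-}


type alias Batch =
    { events : List String
    , start : Maybe String
    , end : String
    }


mostRecentEventsFrom : List Batch -> String -> List (List String)
mostRecentEventsFrom batches token =
    walk batches [] token


walk : List Batch -> List String -> String -> List (List String)
walk batches visited token =
    if List.member token visited then
        -- Token already passed on this path: stop, so loops terminate.
        [ [] ]

    else
        case List.filter (\b -> b.end == token) batches of
            [] ->
                -- No batch ends here: this path starts empty at this token.
                [ [] ]

            ending ->
                -- One continuation per batch ending at this token (forks).
                List.concatMap
                    (\b ->
                        case b.start of
                            Nothing ->
                                [ b.events ]

                            Just s ->
                                walk batches (token :: visited) s
                                    |> List.map (\prefix -> prefix ++ b.events)
                    )
                    ending
```

Under this model, two batches sharing an end token ("Forks in the road") produce two lists, and a cycle such as `token_2 → token_3 → token_2` terminates because the visited list cuts the recursion — mirroring the "Weird loops stop looping" property.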