I have an automation that sends me a set of issues on the first of the month at 01:00 CST:
issuetype = bug AND status in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)
I also have a query that returns the same info for past months:
issuetype = bug AND status was in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") ON (2022-08-01) AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)
Both return applicable issues, but each list contains some issues the other doesn't. Since both queries use the same criteria, I'd expect the lists to be identical. What nuance am I overlooking?
The first one looks at the status of the issues at the moment the automation runs (01:00 CST).
The second checks whether the issues were in any of the specified statuses at any time during the 24 hours of the specified date.
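For example, you can pin the WAS search to a specific moment rather than the whole day by including a time in the ON value (a sketch; JQL accepts quoted "yyyy-MM-dd HH:mm" values):
issuetype = bug AND status was in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") ON "2022-08-01 01:00" AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)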
I ran the automation and it found 100 issues. I took the timestamp and plugged it into the query, which returned 118.
This leads me to believe there's a difference in what's being searched for, though it's not clear what.
What do you mean you took the timestamp and "plugged it into the query"? What did that query look like after you updated it?
Did you review the two lists to find the issues that were in one and not in the other, and then review the History for those issues?
Here's a screenshot from the automation log.
I took that date and time and ran the query, looking for issues matching at that point in time:
issuetype = bug AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three) AND status was in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") ON "2022/09/09 14:47"
They return a different number of results.
Hm, this is interesting.
I tried a very small data set of 4 newly created issues. I changed one to In Progress.
I ran a scheduled automation like you have for
status in (Backlog)
...and got the three issues I expected. Then, like you, I took the timestamp from the issue run and used Advanced Issue Search to execute the query:
status was in (Backlog) on "<timestamp>"
...and that result included the fourth issue currently in the In Progress status.
I think I know what happened in my case. Maybe this will explain your discrepancies also.
My user account preference time zone is set to Los Angeles. The default time zone for my instance is UTC, a 7-hour difference.
In the rule's audit log, the displayed timestamp is adjusted to my account preference time zone.
Issue change times, however, are stored in the database in the instance's time zone.
I determined that the <timestamp> value entered in the Advanced Issue Search screen for the ON predicate is compared directly against the status-change timestamps as recorded in the database, which for me means UTC (the instance's time zone setting).
So, when I put "2022-09-09 14:44" (the Los Angeles-relative timestamp from the audit log) into my advanced search, the comparison treated it as UTC, the time zone of the date/times in the issue change log. Seven hours before the actual run time, my fourth issue was still in the Backlog status, which is why it showed up. When I adjusted the timestamp to align with the UTC time at which the fourth issue's status change occurred, the fourth issue was properly excluded.
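To illustrate with my numbers (a sketch, assuming Pacific Daylight Time, UTC-7): the audit log showed 14:44 Los Angeles time, which is 21:44 UTC, so the corrected search becomes:
status was in (Backlog) ON "2022-09-09 21:44"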
Given that information, does that explain the discrepancies in your case also?
Alright, headway!
Searching for current issues, noting the UTC time, and then running the WAS IN search with that UTC time gives an essentially 1:1 match, about 120 issues.
It seems the 'bug' is in the automation. For half the projects it returns a 1:1 match with the manually run queries; for the other half it returns no results at all.
It's a global automation, scoped to All Projects, and all parameters match the manually executed queries. Hitting Validate query returns 120 issues.
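For reference, the per-project spot checks look like this (ABC is a placeholder project key):
project = ABC AND issuetype = bug AND status in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)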
Is something truncating how many issues it'll email?
{{#lookupIssues}}
{{issueType}}, {{key}}, {{customfield_13261.value}}, {{customfield_13224.value}}, {{customfield_13264.value}}, {{created.jiraDate}}
{{^last}}
{{/}}
{{/}}
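(For context on the template above, if I'm reading the smart values docs correctly: the {{#lookupIssues}} ... {{/}} section iterates over the issues returned by the Lookup Issues action, and the inner {{^last}} ... {{/}} section renders its content, here just a line break, after every issue except the last.)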
For the Free plan there is a 100 emails/24 hours limit, but your post is tagged as Premium so that should not apply to you.
More info on automation limits can be found here.
Otherwise I'm not aware of a limitation on sending emails, but I wouldn't consider myself an authority on that.