Hi @Splunk3, If you're using the Splunk Add-on for ServiceNow, new copies of table records are indexed by Splunk whenever the input sees a newer timestamp value in, e.g., sys_updated_on. You most likely want to calculate durations from the latest instance of each record indexed by Splunk:

index=servicenow sourcetype=snow:change_request short_description IN ("abc", "xyz", "123")
| stats latest(short_description) as short_description latest(eval(strptime(dv_closed_at, "%F %T")-strptime(dv_opened_at, "%F %T"))) as duration_secs by sys_id
| stats avg(eval(duration_secs/3600)) as avg_duration_hours by short_description

I'm using stats aggregation eval shortcuts, but at a high level, the search finds the most recent short_description and the most recent duration by sys_id, and then calculates the average duration by short_description. If your changes stay open for long periods before approval and implementation, you may prefer the work_start and work_end fields or their equivalents, depending on your use case. Most orgs customize the change workflow and schema to fit their service model.
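As a minimal sketch of that work-fields variant, assuming the add-on exposes their display values as dv_work_start and dv_work_end (the field names may differ in your environment):

index=servicenow sourcetype=snow:change_request short_description IN ("abc", "xyz", "123")
| stats latest(short_description) as short_description latest(eval(strptime(dv_work_end, "%F %T")-strptime(dv_work_start, "%F %T"))) as duration_secs by sys_id
| stats avg(eval(duration_secs/3600)) as avg_duration_hours by short_description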
Hi @mr103, It shouldn't have worked in 8.2. If present, static/appLogo.png and static/appLogo_2x.png supersede the app.conf [ui] label setting. static/appIcon*.png should be displayed on the home page, the Apps menu, etc.
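A minimal sketch of the relevant pieces, assuming a hypothetical custom app named my_app:

# $SPLUNK_HOME/etc/apps/my_app/default/app.conf
[ui]
is_visible = true
label = My App

# If either of these files exists, the image is shown in the nav bar
# instead of the text label above:
#   $SPLUNK_HOME/etc/apps/my_app/static/appLogo.png
#   $SPLUNK_HOME/etc/apps/my_app/static/appLogo_2x.png
# appIcon.png / appIcon_2x.png are used on the home page and Apps menu,
# not as the nav bar logo.

If this is the cause, removing or renaming appLogo*.png and refreshing should make the text label reappear.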
Hi @ssbapat, A complete stack trace would reveal more, but in a nutshell, certificate verification failed in the underlying SSL/TLS class. Does "splunk.org.company.com" (or the actual hostname) match either the common name (cn) or a subject alternative name (SAN or subjectAltName) on the server's certificate? Are all certificates in the server's certificate chain trusted by the client?
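A couple of standard openssl checks can answer both questions; this is just a sketch assuming the connection is to the management port 8089 (substitute the actual hostname, port, and CA bundle path):

# Show the subject (CN) and subject alternative names of the server certificate
openssl s_client -connect splunk.org.company.com:8089 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -E "Subject:|DNS:"

# Check whether the server's chain verifies against the CA bundle the client trusts
openssl s_client -connect splunk.org.company.com:8089 -CAfile /path/to/trusted_ca_bundle.pem </dev/null 2>/dev/null | grep "Verify return code"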
Anyone? @phanTom 
Hello, After upgrading from 8.2 to 9.1, I noticed a change in the nav bar affecting most of the custom apps. On the right end of the nav bar, where the app logo (the appIcon*.png file from the <appname>/static folder) is displayed, the app label (configured in app.conf as "label" in the [ui] section) is simply not showing.

Strangely enough, for some applications, like "Search & Reporting", the text label still appears. But for the majority of the third-party apps from Splunkbase, and also for my own custom apps, the label is not showing at all. (For the record: the logo icon is showing, but the text label is not.) This is very annoying.

After some investigation, it does NOT appear to be a CSS styling issue: according to the browser's web inspector, the HTML "span" element that should hold the app label is not populated with the value configured in app.conf/[ui]/label. The "span" element is just empty. Why is that?

Regards, mr
Jumping in on an aging topic, but you may be able to simplify the SPL, albeit with an unknown impact to performance. (Always test!)

| makeresults format=json data="[{\"foo\": {\"field1\": \"value1\", \"field2\": \"value2\"}}, {\"bar\": {\"field1\": \"value3\", \"field2\": \"value4\"}}, {\"baz\": {\"field2\": \"value5\", \"field3\": \"value6\"}}]"
| spath
``` end test data ```
| table *.*
| transpose
| rex field=column "(?<prefix>[^.]+)\\.(?<suffix>.+)"
| foreach row* [ eval value=coalesce('<<FIELD>>', value) ]
| xyseries prefix suffix value
Hi @cybersecnutant, If we assume your events are JSON-formatted:

{"event.field1": "value1", "event.field2": "value2", "event.field3": "value3"}

you can remove the "event." field prefix using SEDCMD in props.conf on your heavy forwarder:

# local/props.conf
[your_sourcetype]
SEDCMD-strip-event = s/"event\.([^"]+)"/"\1"/g

or a combination of RULESET in props.conf and a transform in transforms.conf (together, an ingest action) on either your heavy forwarder or your indexer(s):

# local/props.conf
[your_sourcetype]
RULESET-strip-event = strip-event

# local/transforms.conf
[strip-event]
INGEST_EVAL = _raw:=replace(_raw, "\"event\.([^\"]+)\"", "\"\\1\"")

You can also reference the same INGEST_EVAL transform in a TRANSFORMS setting in props.conf. The difference is where in the pipeline the various methods execute: SEDCMD and TRANSFORMS execute between typingQueue and rulesetQueue (along the path for "raw" data), while RULESET executes between rulesetQueue and indexQueue (the injection point for parsed or "cooked" data). Your final raw event would be:

{"field1": "value1", "field2": "value2", "field3": "value3"}
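For completeness, a minimal sketch of the TRANSFORMS variant mentioned above, paired with the same [strip-event] transform in transforms.conf (stanza names are illustrative):

# local/props.conf
[your_sourcetype]
TRANSFORMS-strip-event = strip-event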
Hi @shashankk , as @ITWhisperer said, you have the Priority and TestMQ fields in different events, so you cannot correlate them. You have to find a field common to all the events. So if, e.g., Q1 (the final part of TestMQ, which is also present in the other events) can be used as a key, you could run something like this:

| makeresults
| eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule]."
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule]." ]
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@instance\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY key
| fillnull value=0
| addtotals

Ciao. Giuseppe
FreeBSD 11 is the only supported version for the 9.1.2 universal forwarder. https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements#Unix_operating_systems
Hi @D644832, You can use the undocumented pdfgen service, e.g.:

curl -k -X POST -u user:pass https://localhost:8089/services/pdfgen/render?input-dashboard=dashboard_name

The response body will have Content-Type: application/pdf, which you can save directly as a PDF file. See $SPLUNK_HOME/etc/system/bin/pdfgen_endpoint.py or search the community for pdfgen for more information.
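For example, to save the response straight to a file (the output filename is just an illustration):

curl -k -X POST -u user:pass -o dashboard_name.pdf https://localhost:8089/services/pdfgen/render?input-dashboard=dashboard_name

The -o flag writes the response body, which in this case is the rendered PDF, to the given filename.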
If you're alternatively looking for a simple, more direct solution, you can combine stats dc() with eval in any search:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats dc(eval(case(prod=="100" OR prod=="200", cust))) as distinct_count_of_cust_where_prod_in_100_200

distinct_count_of_cust_where_prod_in_100_200
--------------------------------------------
4
Hi @AdrianH, If you'd like to generate distinct counts for an arbitrary number (n) of combinations (2^n-1), you can generate those combinations from a base search and, for example, map the combinations to a subsearch to generate distinct counts. The combinations could also be used to populate a dashboard input field. I've introduced a bitwise AND macro to help with identifying combinations:

[bitand_32(2)]
args = x, y
definition = sum(1 * (floor($x$ / 1) % 2) * (floor($y$ / 1) % 2), 2 * (floor($x$ / 2) % 2) * (floor($y$ / 2) % 2), 4 * (floor($x$ / 4) % 2) * (floor($y$ / 4) % 2), 8 * (floor($x$ / 8) % 2) * (floor($y$ / 8) % 2), 16 * (floor($x$ / 16) % 2) * (floor($y$ / 16) % 2), 32 * (floor($x$ / 32) % 2) * (floor($y$ / 32) % 2), 64 * (floor($x$ / 64) % 2) * (floor($y$ / 64) % 2), 128 * (floor($x$ / 128) % 2) * (floor($y$ / 128) % 2), 256 * (floor($x$ / 256) % 2) * (floor($y$ / 256) % 2), 512 * (floor($x$ / 512) % 2) * (floor($y$ / 512) % 2), 1024 * (floor($x$ / 1024) % 2) * (floor($y$ / 1024) % 2), 2048 * (floor($x$ / 2048) % 2) * (floor($y$ / 2048) % 2), 4096 * (floor($x$ / 4096) % 2) * (floor($y$ / 4096) % 2), 8192 * (floor($x$ / 8192) % 2) * (floor($y$ / 8192) % 2), 16384 * (floor($x$ / 16384) % 2) * (floor($y$ / 16384) % 2), 32768 * (floor($x$ / 32768) % 2) * (floor($y$ / 32768) % 2), 65536 * (floor($x$ / 65536) % 2) * (floor($y$ / 65536) % 2), 131072 * (floor($x$ / 131072) % 2) * (floor($y$ / 131072) % 2), 262144 * (floor($x$ / 262144) % 2) * (floor($y$ / 262144) % 2), 524288 * (floor($x$ / 524288) % 2) * (floor($y$ / 524288) % 2), 1048576 * (floor($x$ / 1048576) % 2) * (floor($y$ / 1048576) % 2), 2097152 * (floor($x$ / 2097152) % 2) * (floor($y$ / 2097152) % 2), 4194304 * (floor($x$ / 4194304) % 2) * (floor($y$ / 4194304) % 2), 8388608 * (floor($x$ / 8388608) % 2) * (floor($y$ / 8388608) % 2), 16777216 * (floor($x$ / 16777216) % 2) * (floor($y$ / 16777216) % 2), 33554432 * (floor($x$ / 33554432) % 2) * (floor($y$ / 33554432) % 2), 67108864 * (floor($x$ / 67108864) % 2) * (floor($y$ / 67108864) % 2), 134217728 * (floor($x$ / 134217728) % 2) * (floor($y$ / 134217728) % 2), 268435456 * (floor($x$ / 268435456) % 2) * (floor($y$ / 268435456) % 2), 536870912 * (floor($x$ / 536870912) % 2) * (floor($y$ / 536870912) % 2), 1073741824 * (floor($x$ / 1073741824) % 2) * (floor($y$ / 1073741824) % 2), 2147483648 * (floor($x$ / 2147483648) % 2) * (floor($y$ / 2147483648) % 2))
iseval = 0

With the macro in hand, we can generate a table of possible combinations and then use the table values as indices into an array of unique prod values to generate combinations:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats values(prod) as prod
| eval x=mvrange(1, pow(2, mvcount(prod)))
| eval i=mvrange(0, ceiling(log(mvcount(x), 2)))
| mvexpand i
| eval i_{i}=pow(2, i)
| fields - i
| stats values(*) as *
| mvexpand x
| foreach i_* [ eval y=mvappend(y, case(`bitand_32(x, <<FIELD>>)`==<<FIELD>>, mvindex(prod, log(<<FIELD>>, 2)))) ]
| eval prod="(".mvjoin(mvmap(y, "prod=\"".y."\""), " OR ").")"
| fields prod

prod
----------------------------------------
(prod="100")
(prod="200")
(prod="100" OR prod="200")
(prod="300")
(prod="100" OR prod="300")
(prod="200" OR prod="300")
(prod="100" OR prod="200" OR prod="300")

We can use the map command to pass the prod field to an arbitrary number of subsearches to count distinct values:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats values(prod) as prod
| eval x=mvrange(1, pow(2, mvcount(prod)))
| eval i=mvrange(0, ceiling(log(mvcount(x), 2)))
| mvexpand i
| eval i_{i}=pow(2, i)
| fields - i
| stats values(*) as *
| mvexpand x
| foreach i_* [ eval y=mvappend(y, case(`bitand_32(x, <<FIELD>>)`==<<FIELD>>, mvindex(prod, log(<<FIELD>>, 2)))) ]
| eval prod="(".mvjoin(mvmap(y, "prod=\"".y."\""), " OR ").")"
| fields prod
| map search="| makeresults format=csv data=\"datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200\"
| eval filter=$prod$, match=case(searchmatch(\"$prod$\"), 1)
| stats dc(eval(case(match==1, cust))) as cust_distinct_count by filter" maxsearches=10000

filter                                   cust_distinct_count
---------------------------------------- -------------------
(prod="100")                             3
(prod="200")                             2
(prod="100" OR prod="200")               4
(prod="300")                             1
(prod="100" OR prod="300")               3
(prod="200" OR prod="300")               2
(prod="100" OR prod="200" OR prod="300") 4

Note that the map command generates one search per filter value, and scalability is a concern. The maxsearches argument should be a number greater than or equal to 2^n-1. I've used 10000, which would accommodate n=13 products (2^13-1 = 8191). I'm assuming your actual number of products is much higher. The search that generates combinations can be used on its own, however, and you can dispatch subsequent searches in whatever way makes sense for your dashboard.
Once again, let me ask: Please draw a table to illustrate the output you desire. Without it, volunteers are wasting time reading minds.

For agent.status.policy_refresh_at 2024-01-04T10:31:35.529752Z, should UpdateTime be 10:31:35.529752Z? Do you want 10:31:35.529752? Do you want 10:31:35.5, as your initial code would have suggested? Or do you want something totally different? Is the sample output I posted based on your mock data what you expect (save potential differences in format, precision, etc.)? Do you intend to perform numerical comparisons with UpdateTime/UpdateDate after this table is established? All of these were asked in the previous post.

The "report" that you vaguely allude to (again, a precise, specific requirement makes a good question) suggests (faintly) to me that you will want some numeric calculation after separating UpdateTime from agent.status.policy_refresh_at. (Question 4.) If so, it also implies that you really need to preserve the time zone and not lose precision. (Question 2.) If my posted output is what you expect (Question 3), one way to achieve this is to apply strptime against this text UpdateTime using a fixed date such as 1970-01-01. However, Splunk is full of gems like timewrap, which I only recently learned about from this forum. It may work a lot better for your use case, but the search would be rather different. It all depends on the exact output you desire.

The moral is: ask questions that volunteers can meaningfully help with. A good question begins with an accurate description/illustration of (anonymized or mock) input data, a precise illustration of the desired output, and a sufficient explanation of the logic (how to do it on paper) between the data and the desired output.
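A minimal sketch of the fixed-date strptime idea above, assuming UpdateTime holds text like 10:31:35.529752Z (field name and format string follow the thread; adjust to your actual data):

| eval UpdateTime_epoch=strptime("1970-01-01T".UpdateTime, "%Y-%m-%dT%H:%M:%S.%6NZ")

Here the trailing Z is matched as a literal character, so the value is interpreted in the search-time time zone; that is fine for comparing or aggregating UpdateTime values against each other without losing sub-second precision, but adjust if you need true UTC epochs.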
when using this same search in a dashboard, it only produces the latest event in the table output. The same query from the search app works just fine, I have no idea why... Any thoughts? Are you using Dashboard Studio's chained search? (Or even manually crafting a chained search in Simple XML?) I recently helped uncover a bug of sorts where a subtle, weakly documented optimization feature could make a search-app search and a dashboard panel differ. (Do you lose any information between Chain Searches in Dashboards?) If this is the case, you will need to make sure any variable used in the chained search is preserved before the main search ends. (Post a new question if you need help with that so future users can easily find what they need.) If you are not using a chained search, click the magnifying glass ("Open in Search") under the dashboard panel to compare your original search with the one directly from the panel. There has to be some subtle difference.
You might be using the wrong HTTP method; try POST and it should work.
Thanks @yuanliu , Your search produced the output in the format they are looking for perfectly. As such, I will credit you with the correct answer (for my use). However, when using this same search in a dashboard, it only produces the latest event in the table output. The same query from the search app works just fine, I have no idea why... Any thoughts?
Thanks, PickleRick. Your advice is accurate, but the output is sorting alphabetically and still includes more than 5 events for whatever reason. I've removed the second subsearch as mentioned; I should have caught that (I inherited this report). I'll keep testing your suggestions and see if I can make it do what they want.
I tried to raise a bug report, but it asked me to raise it as an idea, so here it is: https://ideas.splunk.com/ideas/EID-I-2176 Could you please upvote it so that Splunk will resolve it soon? Thanks.
Is there a common field across the two types of events to correlate them? Since the TestMQ and Priority fields are contained in separate events, a simple stats using TestMQ as a by-field will not work. But if there is some way to stitch the two event types together first, you could make it work. I am not familiar enough with the data, and the sample size is too small, to figure out what that correlation field may be (if it exists at all), but I did put this together as a proof of concept. Example:

<base_search>
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
``` using this rex as a demonstration of an example correlation field to link events together ```
| rex "\d+\s+\d{2}(?:\:\d{2}){2}\s+\d+\s+(?<corr_field>[^\:]+)"
| bucket span=30m _time
``` attribute extracted TestMQ field values to events with the same correlation field and close proximity in time ```
| eventstats values(TestMQ) as TestMQ by _time, corr_field
``` we can now filter down to events with the Priority field available, now that they have a TestMQ value contribution ```
| where isnotnull(Priority)
| chart count as count over TestMQ by Priority
| addtotals fieldname="TotalCount"
| fields + TestMQ, Low, Medium, High, TotalCount

The results would look something like this (but probably with more rows with live data). I would have selected "host" as the correlation field for the example, but with the 5 sample events, "testserver1.com" didn't appear to have any TestMQ attribution, so I just extracted "testget1" since that was a common value in the logs. I'm not stating this is the correct correlation field by any means; it's for demonstration purposes only.

Edit: And I think you could probably do something similar without using an eventstats command, provided a corr_field exists with a 1-to-1 mapping to the TestMQ value.

<base_search>
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
``` using this rex as a demonstration of an example correlation field to link events together ```
| rex "\d+\s+\d{2}(?:\:\d{2}){2}\s+\d+\s+(?<corr_field>[^\:]+)"
| stats count(eval(Priority=="Low")) as Low, count(eval(Priority=="Medium")) as Medium, count(eval(Priority=="High")) as High, values(TestMQ) as TestMQ by corr_field
| fields + TestMQ, Low, Medium, High
| addtotals fieldname="TotalCount"
Awesome! Glad you got it resolved!