All Posts


Hi @cybersecnutant, If we assume your events are JSON-formatted:

{"event.field1": "value1", "event.field2": "value2", "event.field3": "value3"}

you can remove the event. prefix from the field names using SEDCMD in props.conf on your heavy forwarder:

# local/props.conf
[your_sourcetype]
SEDCMD-strip-event = s/"event\.([^"]+)"/"\1"/g

or a combination of RULESET in props.conf and a transform in transforms.conf (together, an ingest action) on either your heavy forwarder or your indexer(s):

# local/props.conf
[your_sourcetype]
RULESET-strip-event = strip-event

# local/transforms.conf
[strip-event]
INGEST_EVAL = _raw:=replace(_raw, "\"event\.([^\"]+)\"", "\"\\1\"")

You can also reference the same INGEST_EVAL transform in a TRANSFORMS setting in props.conf. The difference is where in the pipeline the various methods execute: SEDCMD and TRANSFORMS execute between typingQueue and rulesetQueue (along the path for "raw" data), and RULESET executes between rulesetQueue and indexQueue (the injection point for parsed or "cooked" data). Your final raw event would be:

{"field1": "value1", "field2": "value2", "field3": "value3"}
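If you want to sanity-check the regex before touching props.conf or transforms.conf, here is a minimal sketch you can paste into the search bar. It uses the same pattern as the INGEST_EVAL above; note the search-bar form of the backreference is \1, whereas the .conf file needs the doubled \\1.

| makeresults
``` mock event mirroring the JSON structure above ```
| eval _raw="{\"event.field1\": \"value1\", \"event.field2\": \"value2\", \"event.field3\": \"value3\"}"
``` same regex and replacement as the INGEST_EVAL transform ```
| eval stripped=replace(_raw, "\"event\.([^\"]+)\"", "\"\1\"")
| table _raw stripped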
Hi @shashankk, as @ITWhisperer said, you have the Priority and TestMQ fields in different events, so you cannot correlate them. You have to find a field common to all the events. So if, e.g., Q1 (that's the final part of TestMQ and it's also present in the other events) can be used as the key, you could run something like this:

| makeresults
| eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule]."
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule]." ]
| append [ | makeresults | eval _raw="240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule]." ]
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| rex field=TestMQ "\w+\.\w+\.(?<key>\w+)"
| rex "TRN\@instance\.R(?<key>[^:]++):"
| rex "Priority\=(?<Priority>\w+)"
| stats values(TestMQ) AS TestMQ count(eval(Priority="Low")) as Low, count(eval(Priority="Medium")) as Medium, count(eval(Priority="High")) as High BY key
| fillnull value=0
| addtotals

Ciao.
Giuseppe
FreeBSD 11 is the only supported version for the 9.1.2 universal forwarder. https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements#Unix_operating_systems
Hi @D644832, You can use the undocumented pdfgen service, e.g.:

curl -k -X POST -u user:pass https://localhost:8089/services/pdfgen/render?input-dashboard=dashboard_name

The response body will have Content-Type: application/pdf, which you can save directly as a PDF file. See $SPLUNK_HOME/etc/system/bin/pdfgen_endpoint.py or search the community for pdfgen for more information.
If you're alternatively looking for a simple, more direct solution, you can combine stats dc() with eval in any search:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats dc(eval(case(prod=="100" OR prod=="200", cust))) as distinct_count_of_cust_where_prod_in_100_200

distinct_count_of_cust_where_prod_in_100_200
--------------------------------------------
4
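If you need several of these counts side by side, the same dc(eval(...)) pattern extends to multiple aggregations in a single stats call. A sketch over the same mock data; the output field names here are made up:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
``` one distinct count per condition, all computed in one pass ```
| stats dc(eval(case(prod=="100", cust))) as dc_cust_prod_100
        dc(eval(case(prod=="200", cust))) as dc_cust_prod_200
        dc(eval(case(prod=="100" OR prod=="200", cust))) as dc_cust_prod_100_200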
Hi @AdrianH, If you'd like to generate distinct counts for an arbitrary number (n) of combinations (2^n-1), you can generate those combinations from a base search and, for example, map the combinations to a subsearch to generate distinct counts. The combinations could also be used to populate a dashboard input field. I've introduced a bitwise AND macro to help with identifying combinations:

[bitand_32(2)]
args = x, y
definition = sum(1 * (floor($x$ / 1) % 2) * (floor($y$ / 1) % 2), 2 * (floor($x$ / 2) % 2) * (floor($y$ / 2) % 2), 4 * (floor($x$ / 4) % 2) * (floor($y$ / 4) % 2), 8 * (floor($x$ / 8) % 2) * (floor($y$ / 8) % 2), 16 * (floor($x$ / 16) % 2) * (floor($y$ / 16) % 2), 32 * (floor($x$ / 32) % 2) * (floor($y$ / 32) % 2), 64 * (floor($x$ / 64) % 2) * (floor($y$ / 64) % 2), 128 * (floor($x$ / 128) % 2) * (floor($y$ / 128) % 2), 256 * (floor($x$ / 256) % 2) * (floor($y$ / 256) % 2), 512 * (floor($x$ / 512) % 2) * (floor($y$ / 512) % 2), 1024 * (floor($x$ / 1024) % 2) * (floor($y$ / 1024) % 2), 2048 * (floor($x$ / 2048) % 2) * (floor($y$ / 2048) % 2), 4096 * (floor($x$ / 4096) % 2) * (floor($y$ / 4096) % 2), 8192 * (floor($x$ / 8192) % 2) * (floor($y$ / 8192) % 2), 16384 * (floor($x$ / 16384) % 2) * (floor($y$ / 16384) % 2), 32768 * (floor($x$ / 32768) % 2) * (floor($y$ / 32768) % 2), 65536 * (floor($x$ / 65536) % 2) * (floor($y$ / 65536) % 2), 131072 * (floor($x$ / 131072) % 2) * (floor($y$ / 131072) % 2), 262144 * (floor($x$ / 262144) % 2) * (floor($y$ / 262144) % 2), 524288 * (floor($x$ / 524288) % 2) * (floor($y$ / 524288) % 2), 1048576 * (floor($x$ / 1048576) % 2) * (floor($y$ / 1048576) % 2), 2097152 * (floor($x$ / 2097152) % 2) * (floor($y$ / 2097152) % 2), 4194304 * (floor($x$ / 4194304) % 2) * (floor($y$ / 4194304) % 2), 8388608 * (floor($x$ / 8388608) % 2) * (floor($y$ / 8388608) % 2), 16777216 * (floor($x$ / 16777216) % 2) * (floor($y$ / 16777216) % 2), 33554432 * (floor($x$ / 33554432) % 2) * (floor($y$ / 33554432) % 2), 67108864 * (floor($x$ / 67108864) % 2) * (floor($y$ / 67108864) % 2), 134217728 * (floor($x$ / 134217728) % 2) * (floor($y$ / 134217728) % 2), 268435456 * (floor($x$ / 268435456) % 2) * (floor($y$ / 268435456) % 2), 536870912 * (floor($x$ / 536870912) % 2) * (floor($y$ / 536870912) % 2), 1073741824 * (floor($x$ / 1073741824) % 2) * (floor($y$ / 1073741824) % 2), 2147483648 * (floor($x$ / 2147483648) % 2) * (floor($y$ / 2147483648) % 2))
iseval = 0

With the macro in hand, we can generate a table of possible combinations and then use the table values as indices into an array of unique prod values to generate combinations:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats values(prod) as prod
| eval x=mvrange(1, pow(2, mvcount(prod)))
| eval i=mvrange(0, ceiling(log(mvcount(x), 2)))
| mvexpand i
| eval i_{i}=pow(2, i)
| fields - i
| stats values(*) as *
| mvexpand x
| foreach i_* [ eval y=mvappend(y, case(`bitand_32(x, <<FIELD>>)`==<<FIELD>>, mvindex(prod, log(<<FIELD>>, 2)))) ]
| eval prod="(".mvjoin(mvmap(y, "prod=\"".y."\""), " OR ").")"
| fields prod

prod
----
(prod="100")
(prod="200")
(prod="100" OR prod="200")
(prod="300")
(prod="100" OR prod="300")
(prod="200" OR prod="300")
(prod="100" OR prod="200" OR prod="300")

We can use the map command to pass the prod field to an arbitrary number of subsearches to count distinct values:

| makeresults format=csv data="datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200"
| stats values(prod) as prod
| eval x=mvrange(1, pow(2, mvcount(prod)))
| eval i=mvrange(0, ceiling(log(mvcount(x), 2)))
| mvexpand i
| eval i_{i}=pow(2, i)
| fields - i
| stats values(*) as *
| mvexpand x
| foreach i_* [ eval y=mvappend(y, case(`bitand_32(x, <<FIELD>>)`==<<FIELD>>, mvindex(prod, log(<<FIELD>>, 2)))) ]
| eval prod="(".mvjoin(mvmap(y, "prod=\"".y."\""), " OR ").")"
| fields prod
| map search="| makeresults format=csv data=\"datetime,cust,prod
1:00 PM,A,100
1:00 PM,A,200
1:00 PM,A,300
1:00 PM,B,100
1:00 PM,C,100
2:00 PM,A,100
2:00 PM,A,200
2:00 PM,A,300
3:00 PM,D,200\"
| eval filter=$prod$, match=case(searchmatch(\"$prod$\"), 1)
| stats dc(eval(case(match==1, cust))) as cust_distinct_count by filter" maxsearches=10000

filter                                     cust_distinct_count
-----------------------------------------  -------------------
(prod="100")                               3
(prod="200")                               2
(prod="100" OR prod="200")                 4
(prod="300")                               1
(prod="100" OR prod="300")                 3
(prod="200" OR prod="300")                 2
(prod="100" OR prod="200" OR prod="300")   4

Note that the map command generates one search per filter value, and scalability is a concern. The maxsearches argument should be a number greater than or equal to 2^n-1. I've used 10000, which would accommodate n=13 products (2^13-1 = 8191). I'm assuming your actual number of products is much higher. The search that generates combinations can be used on its own, however, and you can dispatch subsequent searches in whatever way makes sense for your dashboard.
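As one example of dispatching things differently, the combination list could be written to a lookup and read back by a dashboard input's populating search. This is only a sketch; prod_combinations.csv is a hypothetical lookup name.

``` appended to the end of the combination-generating search above ```
| fields prod
| outputlookup prod_combinations.csv

``` populating search for a dashboard dropdown input ```
| inputlookup prod_combinations.csv
| fields prod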
Once again, let me ask: please draw a table to illustrate the output you desire. Without it, volunteers are wasting time reading minds.

For agent.status.policy_refresh_at 2024-01-04T10:31:35.529752Z, should UpdateTime be 10:31:35.529752Z? Do you want 10:31:35.529752? Do you want 10:31:35.5, as your initial code would have suggested? Or do you want something totally different? Is the sample output I posted based on your mock data what you expect (save for potential differences in format, precision, etc.)? Do you intend to perform numerical comparisons with UpdateTime/UpdateDate after this table is established? All of these were asked in the previous post.

The "report" that you vaguely allude to (again, a precise, specific requirement makes a good question) suggests (faintly) to me that you will want some numeric calculation after separating UpdateTime from agent.status.policy_refresh_at. (Question 4.) If so, it also implies that you really need to preserve the time zone and not lose precision. (Question 2.) If my posted output is what you expect (Question 3), one way to achieve this is to apply strptime against this text UpdateTime using a fixed date such as 1970-01-01. However, Splunk is full of gems like timewrap, which I only recently learned about from this forum. It may work a lot better for your use case, but the search will be rather different. It all depends on the exact output you desire.

The moral is: ask questions that volunteers can meaningfully help with. A good question begins with an accurate description/illustration of (anonymized or mock) input data, a precise illustration of the desired output, and a sufficient explanation of the logic (how to do it on paper) between the data and the desired output.
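If it helps, here is a minimal sketch of that strptime idea, assuming the field is literally named agent.status.policy_refresh_at and always looks like 2024-01-04T10:31:35.529752Z. The 1970-01-01 prefix is an arbitrary fixed date, and the trailing Z is matched literally in the format string:

``` split the timestamp into date and time-of-day parts ```
| eval UpdateDate=replace('agent.status.policy_refresh_at', "T.*$", "")
| eval UpdateTime=replace('agent.status.policy_refresh_at', "^[^T]+T", "")
``` anchor the time-of-day to a fixed date so it can be compared numerically ```
| eval UpdateTime_epoch=strptime("1970-01-01T".UpdateTime, "%Y-%m-%dT%H:%M:%S.%6NZ")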
"when using this same search in a dashboard, it only produces the latest event in the table output. The same query from the search app works just fine, I have no idea why... Any thoughts?"

Are you using Dashboard Studio's chained search? (Or even manually crafting a chained search in Simple XML?) I recently helped uncover a bug of sorts where a subtle, weakly documented optimization feature could make the same search behave differently in the Search app and in a dashboard panel. (Do you lose any information between Chain Searches in Dashboards?) If this is the case, you will need to make sure any field used in the chained search is preserved before the main search ends; see the sketch after this reply. (Post a new question if you need help with that so future users can easily find what they need.)

If you are not using a chained search, click the magnifying glass ("Open in Search") under the dashboard panel to compare your original search with the one directly from the panel. There has to be some subtle difference.
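For the chained-search case, a minimal sketch of what "preserve the fields" can look like; the index, sourcetype, and field names here are hypothetical:

``` base search: end with an explicit fields command so the optimizer cannot drop fields the chained search still needs ```
index=my_index sourcetype=my_sourcetype
| fields _time, host, status

``` chained (post-process) search attached to the panel ```
| stats count by status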
You might be using the wrong HTTP method. Try POST instead and it works.
Thanks @yuanliu, your search perfectly produced the output in the format they are looking for. As such, I will credit you with the correct answer (for my use). However, when using this same search in a dashboard, it only produces the latest event in the table output. The same query from the Search app works just fine; I have no idea why... Any thoughts?
Thanks, PickleRick. Your advice is accurate, but the output is sorting alphabetically and still includes more than 5 events for whatever reason. I've removed the second subsearch as mentioned; I should have caught that (I inherited this report). I'll keep testing your suggestions and see if I can make it do what they want.
I tried to raise a bug report, but it asked me to raise it as an idea, so here it is: https://ideas.splunk.com/ideas/EID-I-2176. Could you please upvote it so that Splunk will resolve it soon? Thanks.
Is there a common field across the two types of events to correlate them together? Since the fields TestMQ and Priority are contained in separate events, just doing a simple stats using TestMQ as a by-field will not work. But if there is some way to stitch the two event types together first, then you could make it work. I am not familiar enough with the data, and the sample size is too small to figure out what that correlation field may be (if it exists at all), but I did put this together as a proof of concept. Example:

<base_search>
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
``` using this rex as a demonstration of an example of a correlation field to link events together ```
| rex "\d+\s+\d{2}(?:\:\d{2}){2}\s+\d+\s+(?<corr_field>[^\:]+)"
| bucket span=30m _time
``` attribute extracted TestMQ field values to events with the same correlation field and close proximity in time ```
| eventstats values(TestMQ) as TestMQ by _time, corr_field
``` we can now filter down to events with the Priority field available now that they have a TestMQ value contribution ```
| where isnotnull(Priority)
| chart count as count over TestMQ by Priority
| addtotals fieldname="TotalCount"
| fields + TestMQ, Low, Medium, High, TotalCount

Results would look something like this (but probably with more rows with live data). I would have selected "host" as the correlation field for the example, but with the 5 sample events, "testserver1.com" didn't appear to have any TestMQ attribution, so I just extracted "testget1" since that was a common value in the logs. I'm not stating this is the correct correlation field by any means; it's for demonstration purposes only.

Edit: And I think you could probably do something similar without using an eventstats command, provided a corr_field exists with a 1-to-1 mapping to the TestMQ value.

<base_search>
| rex "(?:\={3}\>|\<\-{3})\s+TRN[^\:]*\:\s+(?<trn>[^\s]+)"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
``` using this rex as a demonstration of an example of a correlation field to link events together ```
| rex "\d+\s+\d{2}(?:\:\d{2}){2}\s+\d+\s+(?<corr_field>[^\:]+)"
| stats count(eval(Priority=="Low")) as Low, count(eval(Priority=="Medium")) as Medium, count(eval(Priority=="High")) as High, values(TestMQ) as TestMQ by corr_field
| fields + TestMQ, Low, Medium, High
| addtotals fieldname="TotalCount"
Awesome! Glad you got it resolved!
I just changed the cron job; previously I had just been running it from the UI. Once I did that, I started getting alerts. I need to do some more cleanup, but the problem is solved.
Just a follow-up: are you readjusting the cron schedule to fire soon after making the adjustment, in order to test? I'm not sure an alert action will trigger just from doing an "Open in Search" or "Run" action in the UI. I think the scheduler may have to kick off the search for the alert actions to be applied (unless you use the | sendalert command).
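If you do want to exercise the action from an ad-hoc search rather than waiting for the scheduler, a minimal sketch using sendalert; my_custom_action is a hypothetical custom alert action name:

<your alert search here>
``` invoke the registered custom alert action directly from the search pipeline ```
| sendalert my_custom_action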
I've tried that and I didn't see anything.  I tried it again and I still don't see the alert firing.
Okay, so I think since your trigger condition is

search count>0

I suspect it is not firing because there is no field named 'count' for that condition to evaluate as true. Can you try this setting instead? (It should be the same logic as intended.) As long as the KV store has results in it, your alert action should trigger every time the scheduler kicks off the search.
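Alternatively, if you'd rather keep the custom trigger condition search count>0, here is a minimal sketch that guarantees a count field exists, assuming the alert search reads a KV store-backed lookup (my_kvstore_lookup is a hypothetical name):

| inputlookup my_kvstore_lookup
``` stats count always emits a count field (0 when the lookup is empty), so search count>0 fires only when the KV store has rows ```
| stats count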
@_JP: Yes, proper display of information. It is probably a fairly custom requirement. After some research, I am now able to do that with the sendemail command:

< my initial search here>
| table hostname owner version
| outputcsv test.csv
| stats values(owner) as email
| mvexpand email
| eval subject="Test Alert", email_body="This is test email body"
| map search="| inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

Example: If I have users test1 and test2, hosts that belong to test1 are sent to test1@gmail.com and hosts that belong to test2 are sent to test2@gmail.com. The CSV file is getting sent, but now the problem is that the subject and email body are not displayed as I added them; it's just showing Splunk Result.
@Anonymous: I am seeing some inconsistency. Once, the SPL worked and the subject and email body were added as I specified, but sometimes it is not working. The email is getting sent, but internal logs show the subject and email body as empty.