All Posts


Just checking: are the correlation searches the same across the environments, and do they all have the "Create Notable" and "Create Mission Control Incident" adaptive response actions?
I was expecting your props.conf to have INDEXED_EXTRACTIONS = CSV. You are also using a TIME_PREFIX instead of TIMESTAMP_FIELDS, and you have a PREAMBLE_REGEX set, which looks like it matches the first field name in the header (which would remove the header line), though you don't provide FIELD_NAMES.

Putting that all together, it looks like you aren't really treating those files as CSV files. I'm not sure what's going on, but I wonder if it would work right if you treated them as CSV. If that doesn't help, it'd be useful to see the contents of a file that doesn't work, and one that does.
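As a hedged sketch of what treating them as CSV might look like (the sourcetype stanza and field names here are placeholders, since the original settings weren't shown in full):

    [my_csv_sourcetype]
    INDEXED_EXTRACTIONS = csv
    # Only needed if the files carry no header line; placeholder column names:
    FIELD_NAMES = timestamp,host,message
    # Tie timestamp recognition to a column rather than using TIME_PREFIX:
    TIMESTAMP_FIELDS = timestamp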
OK. What is "not working"? Since this is UDP-based, network-level diagnostics are relatively hard with normal tools; you should configure it, try to use it, and sniff the network traffic to see whether anything is being sent.
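For example, on the receiving host (assuming the default syslog port 514; adjust to whatever port you actually configured):

    tcpdump -n -i any udp port 514 -X

Here -n skips DNS lookups, -i any listens on all interfaces, and -X prints the payload so you can see whether the expected data is arriving at all.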
(search1) OR (search2)
| stats values(*) as * by Field1

If they are bigger, more complex searches, you'd need to use append instead of a simple OR of the conditions, but then you have to watch the limits on subsearches.
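A minimal sketch of the append variant, with placeholder searches and the same Field1 key:

    index=foo sourcetype=bar
    | append [search index=baz sourcetype=qux]
    | stats values(*) as * by Field1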
I assume the OP wants a bit more than that. You have two different log sources. One is a log from CyberArk PAS in which you have an event showing a connection from, let's say, user1 to account admin1 on server1. And then you have a normal AD log showing some sensitive action. The idea is to pull user1 from the PAS log and insert it into the AD log.

The problem here, and I'm speaking not as a Splunk user but as a certified CyberArk PAS admin, is that there doesn't have to be a common field to join those two events; you can, for example, have a connection initiated to a server's IP address while the AD logs only contain the server's hostname. So it's not that easy, due to the nature of the events. In some specific cases you probably can do it, but there is no general way.

On the other hand, completely regardless of Splunk, you can do reporting within PAS itself, and you can use the PTA solution, probably available with your PAS license entitlement, to generate alerts on the activity you want to find. But that's a completely different story, for another forum.
So, why not use tostring with duration as I suggested?
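For illustration, that suggestion as a sketch (the field name mirrors the search quoted further down in this digest):

    | eval time_difference = tostring(now() - _time, "duration")

tostring with the "duration" option formats a number of seconds as HH:MM:SS (with days prefixed when needed), so no follow-up strftime call is required.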
More like this:

index=index1 OR index=index2
| eval Result=coalesce(field1, field2)
| stats values(*) as * by Result
hi @maverick27 , you have to extend my search:

index=index1 OR index=index2
| eval Result=coalesce(field1, field2)
| table Result DEPT UID REGION

Ciao. Giuseppe
Sounds like you need to raise it as a new idea, unless someone has already raised it, in which case up-vote it: either an option to set the default series colours by app, or an extension of the charting.seriesColors option to cover tables.
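For reference, today that option is set per chart in Simple XML, something like this (the query and colour values here are arbitrary):

    <chart>
      <search><query>index=_internal | timechart count by sourcetype</query></search>
      <option name="charting.seriesColors">[0x1f77b4,0xff7f0e,0x2ca02c]</option>
    </chart>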
hi @asncari, there's no reason for this behavior! Please make one last try: remove TIME_PREFIX, restart Splunk, and try again. Ciao. Giuseppe
If you are using Forwarder Monitoring in the Monitoring Console, you can find all forwarders that were sending logs to your environment (from the last inventory reset point).
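If you'd rather query the underlying data yourself, a rough sketch against the forwarder connection metrics (this assumes the standard splunkd metrics.log instrumentation):

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) as last_seen by hostname
    | eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")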
Have you tried the -dedup option for fill_summary_index.py? Run the script with -h, like

splunk cmd python fill_summary_index.py -h

There are all sorts of options in there, including dedup and timeframe changes. It might be useful to spend a few minutes reading that carefully.

You may also find it useful to review the fine docs on this: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Managesummaryindexgapsandoverlaps

Happy Splunking!
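For instance, an invocation along the lines of the documented sample (the app name, saved search name, timeframe, and credentials are all placeholders):

    splunk cmd python fill_summary_index.py -app is_app_one -name "summary - count by user" -et -30d -lt now -j 8 -dedup true -auth admin:changeme

Here -dedup true skips timespans that already have summary data, and -j 8 caps the number of concurrent backfill searches.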
Hi Giuseppe, We have configured the props.conf with the sourcetype and the behavior is the same. Thx Giuseppe.
Thank you for your reply. I've sort of gotten that far, but where I'm really struggling is making each index show its percentage of its respective throughput; sorry if I didn't clarify that in the question.
A subsearch seems like the right answer here. A subsearch is enclosed in [] brackets inside your main search, runs first, and the results of that subsearch get fed back into the main search as search terms.

So you have two searches here: the search that finds the CyberArk data, and the one that finds the AD data. You didn't provide either of those separate searches, so I'm just making up some pseudosearches for them. Let's say your CyberArk search is something like

index=cyberark action=doAnImportantThing
| dedup user

which would return a short list of users involved in ... well, whatever doAnImportantThing is in this case. Let's say "Mary" and "John".

So, at its simplest, you just use that as your subsearch:

index=ad [search index=cyberark action=doAnImportantThing | dedup user]

Don't forget to add "search" to the subsearch; it's automatic in the main search, but not anywhere else. Your subsearch runs and returns its data formatted like ( ( user=Mary ) OR ( user=John ) ), which means your outer search ends up being

index=ad ( ( user=Mary ) OR ( user=John ) )

And there you go. You'll want to refer to here for more and more examples: https://docs.splunk.com/Documentation/Splunk/9.1.3/Search/Aboutsubsearches

Some other commands/stuff to know: 'earliest=...' and 'latest=...', and also check out the 'format' command, which can alter how the subsearch result gets returned (to do things like AND, or whatever else you want).
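For completeness, a sketch of format with its default arguments spelled out, reusing the made-up CyberArk search from above; swapping the third and fifth arguments changes how the generated terms are joined:

    index=ad [search index=cyberark action=doAnImportantThing | dedup user | fields user | format "(" "(" "AND" ")" "OR" ")"]

The added | fields user also keeps the subsearch from leaking other fields into the generated search terms.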
I'll test it and tell you. Thx Giuseppe
Hi @asncari, probably the options aren't applied to your sourcetype; please add them to a sourcetype stanza, not to [default], in props.conf:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S

Ciao. Giuseppe
Good afternoon, I have a very strange problem. I have a log with these 2 events:

01/02/2024 13:06:16 - SOLISP1 IP: 10.229.87.80 USER-AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0
01/02/2024 13:00:54 - GGCARO3 IP: 10.229.87.80 USER-AGENT: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Firefox/78.0

The date format in the event is dd/mm/yyyy. Well, Splunk indexes one of them in January and the other in February. We have tried editing the props file as follows:

[default]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S

Anyone know what might be happening?
Exactly, This is my search

`notable_by_id("*")`
| search status_end="false"
| where severity IN ("high", "critical")
| eval time_difference=tostring(now() - _time)
| eval time_difference = strftime(time_difference, "%H:%M:%S")
| table _time, time_difference, rule_name, owner, status_label, "Audit Category", urgency
| rename status_label as Status
Maybe I wasn't clear with my requirement. Apologies guys! Let me give you an example of what I'm trying to do.

1st search contains the following data:

Field1  DEPT      UID
1       Accounts  AA
3       HR        CC
5       Ops       EE
7       Tech      GG
9       Ops       II
10      Tech      JJ
11      HR        KK

2nd search contains the following data:

Field2  REGION
2       NA
4       TY
6       HK
8       AS
10      EU
11      AS

Now, I need to get the common as well as the distinct rows from both tables, as shown below:

Result  DEPT      UID  REGION
1       Accounts  AA
2                      NA
3       HR        CC
4                      TY
5       Ops       EE
6                      HK
7       Tech      GG
8                      AS
9       Ops       II
10      Tech      JJ   EU
11      HR        KK   AS
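For what it's worth, a sketch of the merge described in the replies above, with placeholder index names and Field1/Field2 assumed to be the join keys:

    index=index1 OR index=index2
    | eval Result=coalesce(Field1, Field2)
    | stats values(DEPT) as DEPT values(UID) as UID values(REGION) as REGION by Result
    | sort num(Result)

The stats by Result collapses rows sharing a key into one, leaving the key-only rows with blank columns, which matches the desired result table.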