All Posts

@KwonTaeHoon - The Add-on you are referring to is built by Tenable Inc. I would recommend creating a support ticket with Tenable to ask whether they can add proxy support to the Add-on.

Alternatively, if you are a developer, you can modify line 12 of the asm_processor.py file inside the Add-on's bin folder to add proxy settings like this (only valid for Add-on version 1.0.2):

self.tasm = TenableASM(self.api_key, hostname=hostname, proxies={'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080'})

(Replace the proxy info with your own.)

I hope this helps!!! Kindly upvote if it does!!!
Ah, thank you Richfez, I was unaware that tstats only works with index-time fields. I will have a read through the document you linked and work from there.

Thanks again, really appreciate it.
@a_kearney - I have not upgraded Splunk UF to the latest version yet. I recommend you create a Splunk support ticket for a quick answer to your question.   I hope this helps!!! Kindly upvote if it does!!
@Sishad - I think that's the right approach for your requirement.

Alternatively, you could use the Unix and Linux Add-on's ps.sh script to get this data without having to write the script yourself:
- https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/About
- https://splunkbase.splunk.com/app/833

I hope this helps!!! Kindly upvote if it does!!!
Hi @asncari,
don't remove props.conf; leave it with:

[<your_sourcetype>]
TIME_FORMAT = %d/%m/%Y %H:%M:%S

Otherwise, open a case with Splunk Support, sending them a diag.
Ciao.
Giuseppe
tstats only works with index-time fields, and those fields are all likely to be search-time. Alternatives to try in this case to improve performance:

1) Report acceleration, which lets you create a saved report that keeps most of the details of the search up to date; you then refer to that in your dashboard, and it should drop the run time *way* down.

2) Build a data model (if one doesn't already exist in the CIM add-on) for this data, and accelerate the data model. Similar to the above in overall speed, but quite a bit different under the hood and more flexible.

Both of these (and some other options) are well explained in the docs here: https://docs.splunk.com/Documentation/Splunk/9.1.3/Knowledge/Aboutsummaryindexing

I hope that gets you started! Happy Splunking!
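As a rough sketch of option 2: once a data model covering this data is accelerated, a tstats search against it stays fast even over millions of events. The Web data model and field names below are assumptions; substitute whichever data model fits your data:

```
| tstats count from datamodel=Web where Web.action=* by Web.src
| sort - count
```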
Sometimes there are Unicode characters (e.g. \u00e3) and sometimes there are other characters like ', :, #, etc. I don't have an issue with the Unicode characters, but occasionally one of the other characters is a colon (:), which breaks the new field, since I am building it to be colon-separated.
I see that you set the 3rd capture group to simply grab everything except ". The problem with that is that sometimes there is a colon in that field, and as you can see, I am using replace to colon-separate the new field. What I'm trying to do is find a way to remove all non-alphanumeric characters in that "section" of the log before running the eval. Then I could use your solution.

I tried extracting that section into a multivalue field, then used sed to eliminate all the characters, but wasn't sure how to go further. I could use your solution, but when there is a colon character (:), it would definitely break the building of the new field.

I also thought about doing what you suggested and then using lookaheads/lookbehinds to count the number of colons, and then sed-ing anything non-alphanumeric [^a-zA-Z0-9], but wasn't sure how to go about that either.
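For what it's worth, one way to sanitize that section before building the colon-separated field is the eval replace() function rather than sed. This is only a sketch; the field names (section, part1, part2) are assumptions standing in for your actual extractions:

```
| eval section_clean=replace(section, "[^a-zA-Z0-9]", "")
| eval newfield=part1 . ":" . part2 . ":" . section_clean
```

Because section_clean can no longer contain a colon, the new colon-separated field stays well-formed.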
What tstats commands have you tried so far?

A key thing to remember about tstats is that it can only search index-time fields or fields in an accelerated data model. Use the walklex command to see if the fields you want to use are indexed.
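For example, a walklex search along these lines (a sketch; the index name is an assumption) lists the indexed fields in your buckets, so you can check whether the fields you need actually appear:

```
| walklex index=your_index type=field
| stats count by field
```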
Did you enable HEC? https://docs.splunk.com/Documentation/Splunk/9.1.3/Data/UseHECfromtheCLI says it's something like:

splunk http-event-collector enable -uri https://localhost:8089

(It's unclear from the docs whether enabling a specific token also enables the global HEC settings, but this command definitely will.)
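Once enabled, you can sanity-check HEC with a quick curl against the collector endpoint. A sketch: 8088 is the default HEC port, and <your-token> is a placeholder for a token you have created:

```shell
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <your-token>" \
  -d '{"event": "hello HEC"}'
```

A {"text":"Success","code":0} response indicates the collector accepted the event.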
Hi, @gcusello

Without the props file it behaves the way we originally had it, which is why I added it. I am going to open a case with Broadcom support because this doesn't make sense.

If we solve it, I will post the answer here so that it can be of use to other people.
Hi, I am trying to create a dashboard similar to the Incident Review dashboard. I want users to be able to select a time range and a detection name to display notables per time/rule, but also to show only the fields extracted in the correlation rule's | fields command.

I was able to accomplish part of it by merging the notable index with the rest API command and then extracting with regex the information after | fields in the SPL. That gave me a list of available fields, such as:

title: Brute Force
field_values: _time, created_at, ip, md5, attempts...

I need the list of fields within field_values to be shown as actual fields whenever the user selects a specific detection. This needs to be dynamic, because the fields will change per detection.

Search:

index=notable
| rename search_name as title
| search title="*"
| join type=left title
    [ | rest /servicesNS/-/-/saved/searches splunk_server=local
      | eval disabled=if(disabled=1,"true","false")
      | search disabled=false actions IN ("*notable*")
      | rex field=search "\|\s*(fields|table)\s+(?<field_values>.*)"
      | fields title field_values]

I appreciate any help, since I have been scratching my head for a couple of weeks now. Thanks in advance.
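One pattern that may get you partway there (a sketch, assuming your dashboard populates tokens such as $detection_tok$ and $field_values_tok$ from the user's selection and from the extracted field_values list): feed the field list straight into a table command, since tokens are substituted into the SPL before the search runs:

```
index=notable search_name="$detection_tok$"
| table $field_values_tok$
```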
Hi all, I'm looking to create a dashboard to capture various info on our proxy data. I have a few simple queries:

index=siem-proxy | top limit=5 cs_method

and my other query:

index=siem-proxy | top limit=8 dest_port

The first gets the request methods, such as POST, GET, etc. I want to add this to a dashboard, but I'm looking to streamline the queries first. I tried using tstats but was getting nothing back, so I think I'm getting the syntax wrong. Without streamlining, the queries take a very long time to run, as I have millions of events.

Is there a way to put this into a tstats query that I can use as a visualization?

Thank you.
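A likely reason tstats returned nothing: cs_method and dest_port are probably search-time fields, and tstats only reads index-time fields or accelerated data models. If you accelerate the CIM Web data model over this index, something like the following sketch should work (the data model and field name are assumptions based on CIM; adjust to your setup):

```
| tstats count from datamodel=Web where index=siem-proxy by Web.http_method
| sort - count
| head 5
```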
Hi Splunk Experts,

I am trying to make a health check dashboard for our application. First on the list is monitoring the status of Linux processes. My plan is to execute a shell script on the server and write the status of each process as 0 or 1, where 0 is running and 1 is down. This is written to a log, and the log is pushed to Splunk.

My requirement is then to create a dashboard which displays the service name and its status as green or red.

I just wanted to know whether this is the right approach, or whether there is a more efficient alternative way to achieve the same thing?
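The script part of that approach could be as simple as the sketch below. The service names are assumptions; adjust for your environment, redirect the output to your log file, and have a universal forwarder monitor it:

```shell
#!/bin/sh
# Health-check sketch: one line per service, status=0 if running, status=1 if down.
for svc in nginx sshd; do
    if pgrep -x "$svc" > /dev/null 2>&1; then
        status=0
    else
        status=1
    fi
    # Key=value output is easy for Splunk to auto-extract at search time.
    echo "$(date '+%Y-%m-%d %H:%M:%S') service=$svc status=$status"
done
```

On the Splunk side, a search like index=app_health | stats latest(status) by service, with a color-by-value range (0 green, 1 red), gives the dashboard panel.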
Just checking that the correlation searches are the same across the env and they all have the "Create Notable" and "Create Mission Control Incident" Adaptive responses?
I was expecting your props.conf to have INDEXED_EXTRACTIONS = CSV.

You are also using a TIME_PREFIX instead of TIMESTAMP_FIELDS... And you have a PREAMBLE_REGEX set, which looks like it's set to the first field name in the header (which would remove the header line), though you don't provide FIELD_NAMES...

Putting that all together, it looks like you aren't really treating those files as CSV files. I'm not sure what's going on, but I wonder if it would work right if you treated them as CSV.

If that doesn't help, it would be useful to see the contents of a file that doesn't work, and one that does.
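For reference, a minimal CSV-style props.conf stanza might look like the sketch below; the sourcetype name, timestamp column, and time format are assumptions for illustration:

```
[your_csv_sourcetype]
INDEXED_EXTRACTIONS = CSV
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```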
OK. What is "not working"?

Since this is UDP-based, network-level diagnostics are relatively hard with normal tools. You should configure it, try to use it, and sniff the network traffic to see whether anything is being sent at all.
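For example, on the receiving host you could watch for the traffic with tcpdump (port 514 here is an assumption, as the common syslog-over-UDP default; substitute your configured port):

```shell
tcpdump -n -i any udp port 514
```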
(search1) OR (search2)
| stats values(*) as * by Field1

If they are bigger, more complex searches, you'd need to use append instead of a simple OR condition, but then you have to watch out for subsearch limits.
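The append variant would look roughly like this (a sketch; search1_terms and search2_terms stand in for your actual searches):

```
search1_terms
| append [ search search2_terms ]
| stats values(*) as * by Field1
```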
I assume the OP wants a bit more than that.

You have two different log sources. One is a log from CyberArk PAS in which you have an event showing a connection from, let's say, user1 to account admin1 on server1. And then you have a normal AD log showing some sensitive action. The idea is to pull user1 from the PAS log and insert it into the AD log.

The problem here, and I'm speaking not as a Splunk user but as a certified CyberArk PAS admin, is that there doesn't have to be a common field to join those two events, because you can, for example, have a connection initiated to a server's IP address while the AD logs only contain the server's hostname. So it's not that easy, due to the nature of the events. In some specific cases you probably can do it, but there is no general way.

OTOH, completely regardless of Splunk, you can do reporting within PAS itself, and you can use the PTA solution, probably available with your PAS license entitlement, to generate alerts on the activity you want to find. But that's a completely different story, for another forum.
So, why not use tostring with duration as I suggested?
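For reference, this is the pattern meant: tostring with the "duration" option converts a number of seconds into a readable D+HH:MM:SS string. A runnable sketch (the duration field name is an assumption):

```
| makeresults
| eval duration=3725
| eval duration_readable=tostring(duration, "duration")
```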