All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


WORKED! And this is my final query. TY

`notable_by_id("*")`
| search status_end="false"
| where severity IN ("high", "critical")
| eval timenow=now()
| eval nowstring=strftime(now(), "%H:%M:%S %p")
| eval diff=now()-_time
| eval diff=tostring(diff, "duration")
| table _time, diff, rule_name, owner, status_label, "Audit Category", urgency
| rename status_label as Status
| rename diff as time_difference
Hi @LinghGroove, no problem: you can copy the license file from the old License Master to the new one. Obviously, remove the old license after copying it, to avoid having two installations with the same license. Ciao. Giuseppe
Hello all, I am managing a Splunk architecture with an Enterprise license. Sometime this year I will need to migrate from my current architecture to a new one, eliminating the old one. Will I be able to just copy the license file into the license manager of the new architecture? Is there some contractual problem with this procedure? Thanks a lot.
@cedricamouyal - You are headed in the right direction: use that document to configure the proxy in server.conf, and Splunk will use that proxy for every request originating from Splunk, including licensing. I hope this helps!! Please upvote if it does!!!
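For reference, the server.conf proxy configuration mentioned above is set in a [proxyConfig] stanza. A minimal sketch, assuming a proxy reachable at 10.10.1.10:3128 (replace host, port, and exclusions with your own values):

    [proxyConfig]
    # route Splunk's outbound HTTP/HTTPS requests through the proxy
    http_proxy = http://10.10.1.10:3128
    https_proxy = http://10.10.1.10:3128
    # hosts that should bypass the proxy (e.g. local traffic)
    no_proxy = localhost, 127.0.0.1

A restart is typically required after editing server.conf for the change to take effect.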
@raghul725 - The line below can do what addcoltotals does: | appendpipe [| stats sum(File_Count) by Total_Delivered]   I used Total_Delivered instead of "Total Delivered", as spaces sometimes create issues with tokens.   I hope this helps!!
@Roy_9 - I have personally never noticed that before. But it all depends on the PowerShell script that is being executed.   I hope this helps!!
@KwonTaeHoon - The Add-on you are referring to is built by Tenable Inc. I would recommend creating a support ticket with Tenable to see if they would add proxy support to the Add-on.   Alternatively, if you are a developer, you can modify line no. 12 of the asm_processor.py file inside the bin folder of the Add-on to add proxy settings like this (only valid for Add-on version 1.0.2): self.tasm = TenableASM(self.api_key, hostname=hostname, proxies={'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080'}) (Replace your proxy info here)   I hope this helps!!! Kindly upvote if it does!!!
Ah, thank you Richfez, I was unaware that tstats only worked with index-time fields. I will have a read through that document you linked and work from there.   Thanks again, really appreciate it.
@a_kearney - I have not upgraded Splunk UF to the latest version yet. I recommend you create a Splunk support ticket for a quick answer to your question.   I hope this helps!!! Kindly upvote if it does!!
@Sishad - I think that's the right approach for your requirement.   But alternatively, you could use the Unix and Linux Add-on's ps.sh script to get this data without having to write the script yourself: - https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/About - https://splunkbase.splunk.com/app/833   I hope this helps!!! Kindly upvote if it does!!!
Hi @asncari, don't remove props.conf: leave it with  [<your_sourcetype>] TIME_FORMAT = %d/%m/%Y %H:%M:%S Otherwise, open a case with Splunk Support, sending them a diag. Ciao. Giuseppe
tstats only works with index-time fields, and those fields are all likely to be search-time. Alternatives in this case to try to improve performance:

1) Report acceleration, which should let you create a saved report that keeps most of the details of the search up to date; then you refer to that in your dashboard and it should drop the time *way* down.

2) Or build a data model (if one doesn't exist in the CIM add-on already) for this data, and accelerate the data model. Similar to the above in overall speed, but quite a bit different under the hood and more flexible.

Both of these (and some other options) are in the docs, well explained, here: https://docs.splunk.com/Documentation/Splunk/9.1.3/Knowledge/Aboutsummaryindexing

I hope that gets you started! Happy Splunking!
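Once a data model is accelerated, the kind of top-N search described above can be run with tstats. A rough sketch, assuming the CIM Web data model is populated and accelerated for this proxy data (the datamodel name and http_method field come from CIM, not from the original post):

    | tstats count from datamodel=Web by Web.http_method
    | sort -count
    | head 5

Against an accelerated model this typically returns in seconds even over millions of events, since it reads the summarized data rather than the raw events.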
Sometimes there are unicode characters (e.g.: \u00e3) and sometimes there are other characters like ', :, #, etc... I don't have an issue with the unicode characters, but occasionally one of the other characters is a colon (:) which breaks the new field as I am building it to be colon separated.
I see that you set the 3rd capture group to simply grab everything except ".  The problem with that is that sometimes there is a colon in that field, and as you can see, I am using replace to colon-separate the new field.  What I'm trying to do is find a way to remove all non-alphanumeric characters in that "section" of the log before running the eval.  Then I could use your solution. I tried extracting that section into a multivalue field, then used sed to eliminate the characters, but wasn't sure how to go further.  I could use your solution, but when there is a colon character (:) it would definitely break the building of the new field. I thought about doing what you suggested and then using lookaheads/lookbehinds to count the number of (:) and then sed-ing anything non-alphanumeric [^a-zA-Z0-9], but wasn't sure how to go about that either.
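One way to strip the non-alphanumeric characters before building the colon-separated field is rex in sed mode. A sketch, assuming the problematic text has already been extracted into a field called section (a hypothetical name, not from the original post):

    | rex mode=sed field=section "s/[^a-zA-Z0-9 ]//g"

This rewrites section in place, removing everything except letters, digits, and spaces, so a stray colon can no longer collide with the separator used later.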
What tstats commands have you tried so far? A key thing to remember about tstats is that it can only search for index-time fields or fields in an accelerated datamodel.  Use the walklex command to see if the fields you want to use are indexed.
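For checking which fields are indexed, a walklex invocation looks something like this sketch (your_index is a placeholder; walklex can only be run over indexes you have access to, and type=field lists indexed field names):

    | walklex index=your_index type=field

If cs_method or dest_port do not appear in the output, they are search-time fields and tstats against the raw index will not find them; an accelerated data model would be needed instead.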
Did you enable HEC?  https://docs.splunk.com/Documentation/Splunk/9.1.3/Data/UseHECfromtheCLI says it's something like   splunk http-event-collector enable -uri https://localhost:8089   (It's unclear from the docs whether enabling a specific token enables the global HEC settings or not, but this definitely will do that.)
Hi, @gcusello  Without the props file, it is how we originally had it, which is why I added it. I am going to open a case with Broadcom support because this doesn't make sense. If we can solve it, I will write it up here so that it can be of use to other people.
Hi, I am trying to create a dashboard similar to the Incident Review dashboard. I am looking for users to be able to select a time and detection name to display notables per time/rule, but also to show only the fields extracted in the correlation rule's | fields command.  I was able to accomplish part of it by merging the notable index with a REST API command and then extracting with regex the information after | fields in the SPL, which gave me a list of fields available, such as  title: Brute Force field_values: _time, created_at, ip, md5, attempts... I need that list of fields within field_values to be shown as actual fields whenever the user selects a specific detection. This will need to be dynamic because the fields change per detection.

Search:

index=notable
| rename search_name as title
| search title="*"
| join type=left title
    [| rest /servicesNS/-/-/saved/searches splunk_server=local
     | eval disabled=if(disabled=1,"true","false")
     | search disabled=false actions IN ("*notable*")
     | rex field=search "\|\s*(fields|table)\s+(?<field_values>.*)"
     | fields title field_values]

I appreciate any help, since I have been scratching my head for a couple of weeks now.  Thanks in advance.
Hi all, I'm looking to create a dashboard to capture various info on our proxy data. I have a few simple queries: index=siem-proxy | top limit=5 cs_method and another query, index=siem-proxy | top limit=8 dest_port. The first gets the request methods such as POST, GET, etc. I want to add this to a dashboard, but I'm looking to streamline the queries first. I tried using tstats but got nothing back, so I think I'm getting the syntax wrong. Without streamlining, the queries take a very long time to run as I have millions of events. Is there a way to put this into a tstats query that I can use as a visualization?   Thank you.
Hi Splunk Experts, I am trying to make a health check dashboard for our application. The first thing on the list is to monitor the status of Linux processes. What I am planning is to execute a shell script on the server and write the process status as 0 or 1, where 0 is running and 1 is down. This is written to a log, and the log is pushed to Splunk. My requirement is then to create a dashboard which displays the Service Name and its status as Green or Red. I just wanted to know whether this is the right approach, or whether there is an alternative way to achieve the same more efficiently?
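Once the 0/1 statuses are indexed, the Green/Red mapping described above can be done in the dashboard search itself. A sketch, assuming hypothetical index, sourcetype, and field names (app_health, process_status, service, status are placeholders for whatever the script actually logs):

    index=app_health sourcetype=process_status
    | stats latest(status) as status by service
    | eval health=if(status=0, "Green", "Red")
    | table service health

The health column can then drive color-coded formatting in the dashboard table, so only the script's log format needs to stay stable.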