All Posts

@meshorer that's fine! Download the app and look in the inputs.conf file; it contains the paths to all the log files the app monitors. I gave you the name of the daemon, so just add ".log" to the end and that's the log you need to monitor. I would assume the other SIEM already has an app/parser/collector of some kind for Linux, since the only thing the Splunk App for SOAR won't monitor is the OS performance metrics.
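For anyone scripting this step, here is a hedged sketch of pulling the monitored paths out of an inputs.conf with Python's stdlib configparser. The stanza paths below are made-up placeholders, not the actual contents of the SOAR app's inputs.conf:

```python
import configparser

def monitored_paths(inputs_conf_text):
    """Return the paths from [monitor://...] stanzas in an inputs.conf."""
    parser = configparser.ConfigParser(allow_no_value=True, strict=False)
    parser.read_string(inputs_conf_text)
    return [s[len("monitor://"):] for s in parser.sections()
            if s.startswith("monitor://")]

# Hypothetical stanzas for illustration only:
sample = """
[monitor:///var/log/phantom/phantom_watchdogd.log]
sourcetype = phantom_log

[monitor:///var/log/phantom/spawn.log]
sourcetype = phantom_log
"""
print(monitored_paths(sample))
```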
Hi @phanTom, the thing is that I am looking to forward the logs to an external SIEM which is not Splunk (so this app won't be helpful for my situation). This is done with rsyslog, which is why I want to identify the relevant logs.
There are a number of event codes whose events carry the same static description in every iteration. This page shows how to trim off the event descriptions on ingest, which can save a lot of data: https://docs.splunk.com/Documentation/WindowsAddOn/latest/User/Configuration
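To illustrate what the trim does: the event text below is a shortened, hypothetical Windows event, and the regex is just an example, not the exact rule from the linked docs. On ingest the same substitution would be expressed as a SEDCMD rule in props.conf.

```python
import re

# Hypothetical, shortened Windows event with static boilerplate appended.
event = ("4624: An account was successfully logged on. "
         "Subject: Security ID: S-1-5-18 "
         "This event is generated when a logon session is created.")

# Drop everything from the boilerplate sentence onward; this mirrors
# what an ingest-time SEDCMD substitution would remove.
trimmed = re.sub(r"This event is generated.*$", "", event).rstrip()
print(trimmed)
```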
We are on the Splunk platform. We have servers reporting to on-prem deployment servers, and outputs configured to send to cloud indexers.
I would like to allow-list a URL from my dashboards so that no more redirection warnings pop up. Per the documentation, I can do this by editing web-features.conf on my SHs. What would be the proper way to push this as a bundle?

I tried creating and modifying web-features.conf in an app context on the SHC deployer (../shcluster/apps/myapp/default/web-features.conf), but I still got the pop-up (yes, I restarted the SHs). After using "apply shcluster-bundle", I used btool AND show config to verify the config changes appeared on the SHs. No dice. If I modify web-features.conf directly on the SHs (../etc/system/local/web-features.conf), it works perfectly. Thank you!

My edited web-features.conf below:

[feature:dashboards_csp]
dashboards_trusted_domain.domain1 = *myurl.com
I fixed the issue by decreasing the "chunk_limit_size".
how to convert below json array to table

{
  "Group10": {
    "owner": "Abishek Kasetty",
    "fail": 2,
    "total": 12,
    "agile_team": "Punchout_ReRun",
    "test": "",
    "pass": 6,
    "report": "",
    "executed_on": "Mon Oct 23 03:10:48 EDT 2023",
    "skip": 0,
    "si_no": "10"
  },
  "Group09": {
    "owner": "Lavanya Kavuru",
    "fail": 45,
    "total": 190,
    "agile_team": "Hawks_ReRun",
    "test": "",
    "pass": 42,
    "report": "",
    "executed_on": "Sun Oct 22 02:57:43 EDT 2023",
    "skip": 0,
    "si_no": "09"
  }
}

Expected Output:

agile_team      pass    fail
Hawks_ReRun     42      45
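No SPL answer appears for this question in the thread, but the shape of the transformation is easy to show outside Splunk. A minimal Python sketch, using a trimmed copy of the JSON above (each top-level key is a group; we keep only the fields the table needs):

```python
import json

raw = """{
  "Group10": {"agile_team": "Punchout_ReRun", "pass": 6, "fail": 2},
  "Group09": {"agile_team": "Hawks_ReRun", "pass": 42, "fail": 45}
}"""

# Walk the top-level objects and project out the table columns.
rows = [(g["agile_team"], g["pass"], g["fail"])
        for g in json.loads(raw).values()]
for team, passed, failed in rows:
    print(f"{team}\t{passed}\t{failed}")
```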
I think you can do that with the appendpipe command, which processes the current results and adds new results to the bottom.

| stats values(company), avg(Score) as AvgScore by ip
| appendpipe [ stats avg(AvgScore) as Average, median(AvgScore) as Median, max(AvgScore) as Max by company ]
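For intuition, the same idea expressed in plain Python with the thread's toy numbers (appendpipe itself is the SPL way to do this; the summary subsearch sees the rows produced so far and appends its output at the bottom):

```python
# Per-ip average scores, as the first stats would produce them.
rows = [("ip1", 1.0), ("ip2", 3.0), ("ip3", 4.0)]

# What appendpipe's subsearch does: summarize the rows so far and
# append the summary as extra rows at the bottom of the table.
scores = sorted(s for _, s in rows)
rows += [
    ("Average", sum(scores) / len(scores)),
    ("Median", scores[len(scores) // 2]),
    ("Max", max(scores)),
]
```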
I know this is quite a late response, but you should be able to accomplish this using "Filter using Regex". Select "host" for Source Field. In "Drop Events Matching Regular Expression", enter ^foo- That will drop any events whose host field value starts with foo-.
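The drop rule behaves like this, sketched in Python with made-up host names:

```python
import re

events = [{"host": "foo-web01"}, {"host": "bar-db02"}, {"host": "foo-app03"}]

# "Drop Events Matching Regular Expression" with ^foo- keeps only events
# whose host does NOT start with foo-.
drop = re.compile(r"^foo-")
kept = [e for e in events if not drop.match(e["host"])]
print(kept)
```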
Just ran into this issue myself. In my case we found a handful of UFs that had corrupted PATH statements. Verify you have a correct system path by executing the following PowerShell cmdlet:

$env:path

If your path statement does not contain the following entries, chances are this is why you are receiving the error:

C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\
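If you want to check a PATH value programmatically across many hosts, a small sketch (the required entries are the ones listed above; the comparison is case-insensitive, as on Windows):

```python
REQUIRED = [
    r"C:\WINDOWS\system32",
    r"C:\WINDOWS",
    r"C:\WINDOWS\System32\Wbem",
    r"C:\WINDOWS\System32\WindowsPowerShell\v1.0",
]

def missing_entries(path_value):
    """Return the REQUIRED entries absent from a semicolon-separated PATH."""
    present = {p.strip().rstrip("\\").lower() for p in path_value.split(";")}
    return [r for r in REQUIRED if r.rstrip("\\").lower() not in present]

# A deliberately corrupted PATH for illustration:
print(missing_entries(r"C:\WINDOWS;C:\Tools"))
```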
".84" is correctly interpreted as ".840".  Zero padding is always on the side away from the decimal.  The difference between the two timestamps is 892ms. If the application reporting the event inten... See more...
".84" is correctly interpreted as ".840".  Zero padding is always on the side away from the decimal.  The difference between the two timestamps is 892ms. If the application reporting the event intended the timestamp to be "14.084" then it should be corrected.
Hello, I tried your suggestion and it worked fine. I am accepting this as a solution. Can you also suggest how to put the average, median and max at the bottom of the table? Thank you again. Below is the example:

company     ip    AvgScore
CompanyA    ip1   1
CompanyA    ip2   3
CompanyA    ip3   4
            Average   2.7
            Median    3
            Max       4
Hi Team, we are observing a discrepancy in calculation when the fractional part of the timestamp has fewer than three digits. Example:

Response time: "2023-10-23 14:46:14.84"
Request time: "2023-10-23 14:46:13.948"

The "Response time – Request time" value should be "136ms", but Splunk shows it as about "890ms". While calculating, Splunk treats the inbound value as "2023-10-23 14:46:14.840" instead of ".084", since only two digits are present. So, is there any possibility to resolve this discrepancy at the Splunk query level or .conf level?

Regards, Siva.
Are you interested in leaving out dest_domain values that don't have high counts? A real simple way to approach it is to "pre-count" the dest_domain using eventstats, and keep just those that exceed a particular threshold (in this case 100) with the where command:

index= <Splunk query>
| eventstats count by dest_domain
| where count>100
| timechart span=15m count by dest_domain usenull=f useother=f
| head 15

Also, when you think about how Splunk runs these commands, you might visualize it running them over your data like several for-loops one right after another. That's what it does, and that's what it is optimized for. It's a bit counterintuitive compared to databases, where you try to limit full scans of things; the distributed architecture of Splunk is built for this.
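The eventstats-then-where pattern in miniature, in plain Python with fabricated domain names:

```python
from collections import Counter

events = ["a.com"] * 150 + ["b.com"] * 40 + ["c.com"] * 120

# eventstats count by dest_domain: annotate each event with its total.
counts = Counter(events)

# where count>100: keep only events from the high-volume domains.
kept = [d for d in events if counts[d] > 100]
print(sorted(set(kept)))  # ['a.com', 'c.com']
```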
The old way of just running a python script as an alert action was deprecated a while back.  It was a really simple, "run script and here's the search data."  That way is old and busted...so it is not around anymore.   The new hotness is Custom Alert Actions.  You can create one that runs your python.  There's some more setup/configuration so it is registered with the system and more effort goes into packaging it up as an official configuration in Splunk (official as in you made it for your environment...). Here is a previous discussion on this topic, too.
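A hedged sketch of the payload-handling part of a custom alert action script: Splunk invokes the script with --execute and writes a JSON payload to its stdin. The exact field names used below (search_name, results_file, configuration) should be verified against the custom alert action docs for your Splunk version:

```python
import json

def handle_payload(raw_json):
    """Extract the fields an alert-action script typically needs from
    the JSON payload Splunk writes to the script's stdin."""
    payload = json.loads(raw_json)
    return {
        "search_name": payload.get("search_name"),
        "results_file": payload.get("results_file"),
        "params": payload.get("configuration", {}),
    }

# Simulated payload for illustration only:
sample = json.dumps({"search_name": "my_alert",
                     "results_file": "/tmp/results.csv.gz",
                     "configuration": {"threshold": "5"}})
print(handle_payload(sample)["search_name"])  # my_alert
```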
Three years later and this worked! Thanks!!
To be pedantic, Splunk doesn't have "columns" in the DBA sense. We call them "fields". The head command returns all fields in the first n results. The fields to return can be controlled with the fields command.

index= <Splunk query>
| fields _time column1 column2 ... column15
| timechart span=15m count by dest_domain usenull=f useother=f
| head 15

In this case, however, head is unnecessary because timechart can do the same thing.

index= <Splunk query>
| fields _time column1 column2 ... column15
| timechart limit=15 span=15m count by dest_domain usenull=f useother=f
Hi @Yann.Buccellato, Let me share this with the Docs team and see if we can get this cleared up!
Please give us your definition of "noise". Do none of your other questions on the same topic address this, too? Have you considered using Ingest Actions to avoid indexing unwanted data?  See https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise and https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/DataIngest
Hi All, the Splunk "head" command by default retrieves 10 results. May I know if we can control the number of columns retrieved as well?

index= <Splunk query> | timechart span=15m count by dest_domain usenull=f useother=f | head 15

e.g. a table with columns _time, column1, ..., column15 and rows 1 through 15.