All Posts

@bowesmana Here is the dropdown. The ALL value = * in the query, so when ALL is selected it comes through as server_*_count.
You can't rename it like that - how does that field exist? Is it actually in the data or is it created somehow? Can you post the dropdown where that field is created?
The best way to do this is from the CLI. From your license manager server, run:

splunk list licenses | egrep '(guid|label|license_hash)'

Source: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/LicenserCLIcommands
It is technically correct in what it's telling you: the max value over the 1 hour for each of the MIPS* values is different because it's adding the biggest TRX in the hour + the biggest TRX2 in the hour, which MAY combine values from two different minutes, whereas the first is the biggest TRX in the minute + the biggest TRX2 in the SAME minute. If you want to add up the max values per minute, you can just stack two timechart commands: the first with span=1m to get the max per minute, then a second timechart | timechart span=1h max(*) as * which will then give you the max values in the 1h span.
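As a hedged sketch of the stacked-timechart approach described above (the field names TRX and TRX2 are assumed from the discussion; your actual field names may differ):

```
... | timechart span=1m max(TRX) as TRX max(TRX2) as TRX2
| eval Total=TRX+TRX2
| timechart span=1h max(*) as *
```

The first timechart fixes the per-minute pairing, the eval sums the pair within each minute, and the second timechart then picks the largest of those per-minute totals in each hour.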
Hi, I have a field with the name server_*_count. The * comes from an input dropdown where the ALL value is *. How can I rename it to server_ALL_count? Using | rename server_*_count as server_ALL_count gives me an error that it cannot be renamed because of the asterisk (wildcard).
Hello,

I am trying to add another index column to this table. Currently using the search below.

| tstats count where index IN (network) by _time span=1h
| rename count as Network_Logs
| eval _time=strftime(_time, "%m-%d %H:%M")

| tstats count where index IN (network, proxy) by _time span=1h
| rename count as Network_Logs
| eval _time=strftime(_time, "%m-%d %H:%M")

Adding another index such as proxy doesn't seem to work; it just adds to the total count. Is there any way to count separate indexes by 1 hour intervals?
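One hedged way to sketch per-index hourly counts (assuming the index names network and proxy from the question) is to add index to the tstats by clause and then pivot it into columns:

```
| tstats count where index IN (network, proxy) by _time span=1h index
| xyseries _time index count
```

This should yield one column per index for each hour, rather than a single combined total.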
Generally speaking, Splunk processes events one at a time with no concept of "previous" or "next" events. We can work around that using an aggregation command. Try this:

<<your existing search>>
``` Check if the count for all sources of a transaction_id is zero ```
| eventstats sum(count) as tx_count by transaction_id
| eval Status=case(tx_count=0, "Missing in both sources", count=0, "Missing in source " + source)
| stats values(Status) as Status by transaction_id
I'm trying to add up 2 values per minute to display the max total value per hour. This is my search result. As you can see, the first value with the red arrow contains the maximum value at 1:44. If I change the span to 1 hour, the Total value changes. This is not good. The real max value is the value at 1:44, not the max value of TRX + the max value of TRX2 during the hour. As you can see in the following example, the Total value changes from 6594.90 to 6787.11 for 1 hour. Is there a way to add up the 2 LPARs per minute and then display the highest values per hour without losing the LPAR value?
Hey there! I'm currently struggling to find a way to send the alert sid (commonly found under View Results when using the Send Email action in the alert config) to SOAR. Currently I'm able to send the results as multiple artifacts within one container via the Grouping checkbox. However, if I have a result that holds 5k+ events, then a container will hold 5k+ artifacts. What's interesting is that each artifact within the container has a variable named _originating_search that holds the SID I want to pass. Right now I only want this result sid (_originating_search), but I can't figure out how to do this. Any suggestions welcome!
The dispatch endpoint launches a new search so that will not get what you're looking for.  Try the search/v2/jobs/export endpoint (https://docs.splunk.com/Documentation/Splunk/9.4.0/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport)
Team, I got stats output as below and I need to compare the field value under the column "source" with its count.

Ex: If the count of source ABC is 0 and the count of source XYZ is 1, then it should print "Missing in source ABC". If both are 0, then it should print "Missing in both sources ABC and XYZ".

stats current output:

transaction_id   source   count
12345            ABC      0
12345            XYZ      1

Required table output:

transaction_id   Status
12345            Missing in source ABC
After a bit of working through this and understanding it, this works perfectly, TY.

The below is closer to the actual data I was using.

| makeresults format=csv data="bus_date, appl, tran_count
20250122,appl1,336474
20250122,appl2,93
20250122,appl3,6
20250122,appl4,10585
20250123,appl1,2061
20250123,appl2,1075
20250123,appl3,1
20250123,appl4,190
20250124,appl1,6
20250124,appl2,40635
20250124,appl3,786
20250124,appl4,12978
20250125,appl1,140
20250125,appl2,133
20250125,appl3,514
20250125,appl4,125449
20250126,appl1,98
20250126,appl2,5
20250126,appl3,5258
20250126,appl4,3424
20250127,appl1,596
20250127,appl2,1
20250127,appl3,265
20250127,appl4,3
20250128,appl1,38200
20250128,appl2,1320
20250128,appl3,11706
20250128,appl4,114"
| fields bus_date, appl, tran_count
| streamstats c by appl
| sort bus_date
| stats latest(bus_date) as bus_date avg(eval(if(c=1, null(), tran_count))) as Avg6day latest(tran_count) as todays_total by appl
| eval Variance=todays_total-Avg6day
| sort appl - bus_date
| table bus_date appl, todays_total, Avg6day, Variance
I am using splunk-sdk in my Python code, and I want to get the latest sid of a saved report each time it is refreshed. I tried using saved_search.dispatch(), but the sid I get in the output doesn't retrieve results; in Python it throws a URL-encoding error. Can someone help with this?
Documentation for props.conf is in props.conf.spec.  Documentation of props for Palo Alto is in the add-on.  I've attached props.conf from the TA for your reference.
This resolved it (issue with another app). A similar thing happened to me: the Splunk App for SOAR's configuration page was not loading all of its buttons. We discovered that there was a passwords.conf file (in another app) that was not set correctly. It was pushed from another Splunk instance (the password was already hashed), which interfered with the passwords.conf in the Splunk App for SOAR. The splunkd error logs were essentially 1:1, which is how I found your post. If anyone else stumbles upon this post (with similar errors), be sure to check the passwords.conf configurations - even in other apps. Hope this helps!
Unfortunately, the environment we have forces us to run things a little messy. We don't have a box to use as a syslog server, and as such must run as root. Also, the download restriction is more about logging in than about what is being downloaded. We are not allowed to log in to anything other than the devices in the environment. Do you know of any documentation on good props.conf settings to use?
First, Splunk indexers should not be used as syslog servers. They will lose syslog data during restarts and cannot monitor port 514 (unless running as root, which is another no-no). Instead, use a dedicated syslog receiver (syslog-ng or SC4S) and forward data from there to Splunk.

Second, Splunk apps are not software, so they should not be subject to software download restrictions. Splunk apps are (mostly) just bundles of config files. Yes, some contain executable code (usually open Python), but you don't have to download those.

That said, you need neither app nor add-on to ingest PA Firewall data. Open a TCP port on the indexer and point PA to that port. Without an add-on, Splunk will guess at how to process the data and may (probably will) guess incorrectly. You likely will need to define props.conf settings that tell Splunk the best way to onboard events from PA.
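As a minimal sketch of what such props.conf settings might look like (the sourcetype name and values here are assumptions for illustration, not taken from the PA add-on; see props.conf.spec for each attribute):

```
# Hypothetical sourcetype stanza for PA syslog traffic (name is an assumption)
[pan_firewall_syslog]
# Each syslog line is one event, so disable line merging
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Limit how far Splunk scans for a timestamp (value is illustrative)
MAX_TIMESTAMP_LOOKAHEAD = 32
# Set the timezone explicitly if the devices don't send one
TZ = UTC
```

The exact timestamp and field-extraction settings depend on the log format your firewalls emit, so verify against sample events before deploying.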
If those hosts didn't contain any other data, then just stop the UF and remove the …/var/lib/splunk/fishbucket directory (check the path on your node). Start the UF service and it will start indexing everything from scratch. Then do the same for the other nodes. If those nodes also contain other events that you cannot reindex, then you must remove the entries file by file and start reindexing only for those files. Here is one old post about it: https://community.splunk.com/t5/Deployment-Architecture/Use-btprobe-reset-to-re-index-multiple-files/td-p/313186
Hi @Uma.Boppana, Thanks for letting me know. I think the next best step is to contact AppD Support. Please follow the link above to an article that will walk you through it if you're not familiar with it.
This article has instructions for embedding Dashboard Studio JSON in an XML dashboard file.