
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

The Splunk Enterprise upgrade procedure says to disable the automatic start (boot-start) setting. For what reason is disabling auto-start necessary?
I don't understand why the uploaded data is displayed like this. I am unable to create dashboards because it is not identifying all the data available in the file.
I have data coming in as JSON with fields time, application, feature, username, and hostname. The problem is that username and hostname are nested arrays, like this:

    {
      application: app1
      feature: feature1
      timestamp: 01/29/2025 23:02:00 +0000
      users: [
        { userhost: client1
          username: user1 }
        { userhost: client2
          username: user2 }
      ]
    }

When the event shows up in Splunk, userhost and username are converted to multi-value fields:

    _time                 application  feature   users{}.username  users{}.userhost
    01/29/2025 23:02:00   app1         feature1  user1             client1
                                                 user2             client2

I need an SPL method to convert these into individual events for the purposes of a search, so that I can perform LDAP lookups on each hostname. mvexpand only works on one field at a time and doesn't recognize users or users{} as valid input, and expanding one field at a time loses the relationship between user1:client1 and user2:client2. How can I convert both arrays to individual events by array index, preserving the relationship between username and hostname, like this:

    _time                 application  feature   users{}.username  users{}.userhost
    01/29/2025 23:02:00   app1         feature1  user1             client1
    01/29/2025 23:02:00   app1         feature1  user2             client2
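A minimal SPL sketch of one common workaround, assuming the fields arrive exactly as users{}.username and users{}.userhost: zip the two multivalue fields together by index, expand the zipped pairs, then split each pair back into separate fields.

    | eval pair=mvzip('users{}.username', 'users{}.userhost', "|")
    | mvexpand pair
    | eval username=mvindex(split(pair, "|"), 0), userhost=mvindex(split(pair, "|"), 1)
    | fields - pair

Because mvzip pairs values positionally, the user1:client1 and user2:client2 relationships survive the expansion, and the resulting userhost field can feed the LDAP lookup one row at a time.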
I'm trying to get a table or heatmap of a count of incidents by day and hour. My results make sense, except I'm only getting hours 08 through 20. I know incidents occur around the clock, so I should be seeing a count for every hour. Any suggestions?

    | eval date_hour=strftime(_time, "%H")
    | eval date_wday=strftime(_time, "%A")
    | chart dc(RMI_MastIncNumb) AS incidents over date_wday by date_hour useother=f
    | eval wd=lower(date_wday)
    | eval sort_field=case(wd=="monday",2, wd=="tuesday",3, wd=="wednesday",4, wd=="thursday",5, wd=="friday",6, wd=="saturday",7, wd=="sunday",1)
    | sort sort_field
    | fields - sort_field wd
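As a quick sanity check (a sketch, using only the fields already in the question), this shows whether any incidents actually fall outside 08-20 before chart reshapes them:

    | eval date_hour=strftime(_time, "%H")
    | stats dc(RMI_MastIncNumb) AS incidents by date_hour
    | sort date_hour

If hours 00-07 and 21-23 are also missing here, the gap is in the data itself (or in the search time range / time-zone handling) rather than in the chart command.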
Hi, I have a field named server_*_count. The * comes from a dropdown input whose "ALL" choice has the value *. How can I rename it to server_ALL_count?

    | rename server_*_count as server_ALL_count

This gives me an error saying the field cannot be renamed because of the asterisk (wildcard).
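rename requires matching wildcards on both sides, so server_*_count cannot be renamed to a literal server_ALL_count directly. A minimal sketch of one workaround using foreach, with the field names taken from the question:

    | foreach server_*_count
        [ eval server_ALL_count = '<<FIELD>>' ]
    | fields - server_*_count

foreach copies the value of whichever server_*_count field matched into server_ALL_count, which works when the dropdown resolves to a single concrete field per row.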
Hello, I am trying to add another index column to this table. I am currently using the search below:

    | tstats count where index IN (network) by _time span=1h
    | rename count as Network_Logs
    | eval _time=strftime(_time, "%m-%d %H:%M")

I also tried:

    | tstats count where index IN (network, proxy) by _time span=1h
    | rename count as Network_Logs
    | eval _time=strftime(_time, "%m-%d %H:%M")

Adding another index such as proxy doesn't seem to work; it just adds to the total count. Is there any way to count separate indexes in 1-hour intervals?
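A sketch of one way to keep the per-index counts separate, assuming only the network and proxy indexes are needed: add index to the tstats group-by and pivot it into columns.

    | tstats count where index IN (network, proxy) by _time span=1h index
    | xyseries _time index count
    | rename network AS Network_Logs, proxy AS Proxy_Logs
    | eval _time=strftime(_time, "%m-%d %H:%M")

xyseries turns each index value into its own column, so every 1-hour bucket gets a separate count per index instead of one combined total.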
I'm trying to add up 2 values per minute to display the maximum total value per hour. This is my search result. As you can see, the first value with the red arrow contains the maximum value at 1:44. If I change the span to 1 hour, the Total value changes, which is not what I want: the real max is the pair of values at 1:44, not the max of TRX plus the max of TRX2 during the hour. As you can see in the following example, the Total value changes from 6594.90 to 6787.11 for 1 hour. Is there a way to add up the 2 LPARs per minute and then display the highest value per hour without losing the LPAR values?
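A rough sketch of the general idea, assuming the per-minute results already carry the two LPAR values in fields named TRX and TRX2 (names taken from the screenshot description and possibly wrong): compute the per-minute total first, then keep only the highest minute within each hour so the original LPAR values are preserved.

    | eval minute_total = TRX + TRX2
    | bin _time span=1h
    | sort 0 -minute_total
    | dedup _time
    | table _time TRX TRX2 minute_total

Because dedup keeps the first (highest-total) row per hourly bucket, the hourly maximum is max(TRX + TRX2) for a single minute rather than max(TRX) + max(TRX2) taken independently.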
Hey there! I'm currently struggling to find a way to send the alert SID (commonly found under "View results" when using the Send Email action in the alert config) to SOAR. Currently I'm able to send the results as multiple artifacts within one container via the Grouping checkbox. However, if I have a result that holds over 5k events, then a container will hold over 5k artifacts. What's interesting is that each artifact within the container has a variable named _originating_search that holds the SID I want to pass. Right now I only want this result SID (_originating_search), but I can't figure out how to do this. Any suggestions welcome!
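One possible approach (a sketch, not specific to the SOAR action): addinfo attaches the running search's metadata to every result, so the SID can travel as an ordinary field and be mapped on the SOAR side however you like.

    <your existing alert search>
    | addinfo
    | eval alert_sid=info_sid
    | fields - info_min_time info_max_time info_search_time info_sid

Whether SOAR turns that into one artifact or many still depends on the action's grouping settings, but the SID is then available as a normal field rather than only inside _originating_search.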
Team, I have stats output as below and I need to compare the field value under the "source" column with its count. For example, if the count for source ABC is 0 and the count for source XYZ is 1, it should print "Missing in source ABC". If both are 0, it should print "Missing in both Source ABC and XYZ".

Current stats output:

    transaction_id   source   count
    12345            ABC      0
    12345            XYZ      1

Required table output:

    transaction_id   Status
    12345            Missing in source ABC
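A sketch building on the existing stats output, assuming ABC and XYZ are the only two sources, as shown:

    | eval missing=if(count==0, source, null())
    | stats values(missing) AS missing by transaction_id
    | eval Status=case(mvcount(missing)==2, "Missing in both Source ABC and XYZ",
                       mvcount(missing)==1, "Missing in source ".missing,
                       true(), "Present in both")
    | table transaction_id Status

The first eval keeps the source name only where its count is zero, so the number of values collected per transaction_id drives which Status message is printed.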
I am using splunk-sdk in my Python code, and I want to get the latest SID of a saved report each time it is refreshed. I tried using saved_search.dispatch(), but the SID I get in the output doesn't retrieve results in Python; it throws a URL-encoding error. Can someone help with this?
Hello, I have a Palo Alto Firewall in my environment and would like to set it up to forward logs to a Splunk indexer which is also the syslog server. The environment is small and we are not allowed to log in to anything to download software, so using the App or Add-on isn't possible. Is there a way to directly send my Palo logs to the Splunk indexer?
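Yes, the indexer can listen for syslog directly through a network input, with the caveat that without the Palo Alto Add-on the events won't get the add-on's field extractions. A minimal inputs.conf sketch on the indexer; the port, index, and sourcetype names here are placeholders, not requirements:

    # inputs.conf on the indexer (port/index/sourcetype are placeholders)
    [tcp://5514]
    index = pan_logs
    sourcetype = pan_firewall
    connection_host = ip

The firewall's syslog server profile would then point at the indexer's IP and this port; a [udp://...] stanza works the same way if UDP syslog is preferred.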
Not sure of the best way to go about this. We had an index originally set up with a 30-day retention that they wanted extended to 1 year after it had been running for a while. It was also originally set up to collect only new data going forward, but they now also want all the historical data pulled into Splunk, as this is replacing a different tool. How do I restore the data that has already aged out of retention and collect the old data that was originally outside the window of what we wanted pulled in? I've already adjusted the retention period and removed the ignoreolder=7d from the config. Am I just better off rebuilding the whole thing from scratch?

    [monitor://<Path>]
    index = <Appname>
    sourcetype = chr
    crcSalt = <SOURCE>

    [<Appname>]
    homePath = volume:hot/$_index_name
    coldPath = volume:cold/$_index_name
    summaryHomePath = volume:summaries/$_index_name
    thawedPath = /opt/splunk/data/thawed/$_index_name
    enableTsidxReduction = false
    maxDataSize = auto_high_volume
    frozenTimePeriodInSecs = 31536000
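For context: raising frozenTimePeriodInSecs only affects buckets that have not yet aged out, and unless coldToFrozenDir or a coldToFrozenScript was configured, buckets that already froze were deleted and can only be recovered by re-ingesting from the original source. If archived frozen copies do exist, a rough sketch of the thaw procedure (paths are placeholders based on the thawedPath above):

    # 1. copy the archived bucket into the index's thawedPath
    cp -r /path/to/frozen/archive/db_<newest>_<oldest>_<id> /opt/splunk/data/thawed/<Appname>/
    # 2. rebuild the bucket's index files so it becomes searchable again
    $SPLUNK_HOME/bin/splunk rebuild /opt/splunk/data/thawed/<Appname>/db_<newest>_<oldest>_<id>

For the historical files that were outside the original window, removing the ignore-older setting lets the monitor pick up files it has never read before, but files that were already indexed once will not be re-read.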
We had an on-premises Splunk instance that was storing our colddb data on Isilon storage. We have moved to the cloud and want to move some of the indexed data in the colddb on the Isilon to an AWS instance we have set up. Does anyone have suggestions on how to go about moving the data to AWS? We would just like to have the rawdata and not all the metadata from the apps.
Is there a PowerShell command to find out if Splunk is indeed forwarding logs to the Splunk console? I can check whether the agent is installed and running, but how about forwarding? Which log should I check?
Hey all, I'm looking to trial Splunk Cloud and, more specifically, the Data Manager feature. I've successfully logged in to my trial instance (Version: 9.3.2408.107, Experience: Classic), but "Data Manager" isn't present as an app in the list. Is this not something I can use with a trial version?
Search peer <hostname> has the following message: "Unable to initialize modular input ssg_subscription_modular_input defined in the app splunk secure gateway: introspecting scheme-ssg_subscription_modular_input script running failed." These are new 9.3 builds added to the cluster. I would appreciate any insight. Thanks.
I have an IIS server that is sending logs to Splunk, and the logs are saved in W3C format. I found that the logs are saved with UTC timestamps; only the native IIS log format can save logs in local time, but there is no parser for it. If someone has integrated IIS, please let me know.
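If the goal is just for Splunk to interpret the W3C timestamps correctly, a props.conf sketch that pins the time zone to UTC for the IIS sourcetype may be enough (the stanza name below is a placeholder for whatever sourcetype the input actually uses):

    # props.conf (stanza name is a placeholder)
    [iis]
    TZ = UTC

W3C-format IIS logs are written in UTC by design, so with TZ set Splunk stores the correct epoch time, and search-time display then follows each user's time-zone preference.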
Hi, is it possible to create a workflow like the one below in Splunk? We have 5 jobs running every day, and the start/end time with status is captured in the Splunk logs. We want to create a workflow as below using the start/end time and status of the jobs:
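Without seeing the intended diagram, one common building block is a per-job summary table that a dashboard visualization can then render; a sketch, with job_name and job_status as assumed field names:

    | stats earliest(_time) AS start latest(_time) AS end latest(job_status) AS status by job_name
    | eval duration_min = round((end - start)/60, 1)
    | fieldformat start = strftime(start, "%H:%M:%S")
    | fieldformat end = strftime(end, "%H:%M:%S")
    | sort start

This gives one row per job with start, end, duration, and status, which a timeline or flow-style visualization can consume.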
This isn't so much a question as a comment. I found that time config to be incorrect. My logs start like this:

    {"Time": "29 Jan 2025 03:16:30, PST",

The default time string expects a 2-digit year:

    %d %b %y %H:%M:%S, %Z

Prior to the update, Splunk was still able to figure out the time but missed the timezone parameter. In other words, if your heavy forwarder has the same time zone as your Zscaler logs, you would probably be fine.
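For reference, a props.conf sketch matching the four-digit-year format shown in that sample (the stanza name is a placeholder for the actual Zscaler sourcetype):

    # props.conf (stanza name is a placeholder)
    [zscaler_nss]
    TIME_PREFIX = \{"Time":\s*"
    TIME_FORMAT = %d %b %Y %H:%M:%S, %Z
    MAX_TIMESTAMP_LOOKAHEAD = 32

With %Y instead of %y, both the year and the trailing %Z time-zone token line up with the sample string, so the forwarder's local time zone no longer matters.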
I'm planning to upgrade a multi-site IDX & SHC environment to version 9.3, and I have a question regarding the automated rolling upgrade feature: https://docs.splunk.com/Documentation/Splunk/9.3.0/DistSearch/AutomatedSHCrollingupgrade
With the "Automated rolling upgrade of a search head cluster" feature, there is the option to execute this on:
- For a cluster upgrade, you can run these operations on any cluster member.
- For a deployer upgrade, you must run these operations on the deployer.
- For a non-clustered upgrade, which means upgrading search heads that are not part of a search head cluster, you must run these operations on each single search head.
Is it also possible to use this feature to upgrade the CM, LM, MC, DS & HFW as part of a non-clustered upgrade? There is an option to execute on the deployer and the License Manager, so I assume I can also use it on the other (stand-alone) management nodes. Any help would be much appreciated.