
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @LearningGuy, the approach is correct, but only you can know if the search before the delete command is correct. About the Users menu: you're probably working on a Search Head Cluster, so it's normal that regular users cannot modify user roles; contact an admin to perform this action. One additional piece of information that I forgot in my first answer: the delete command performs a logical deletion, not a physical one. In other words, it marks the events as "deleted" so you no longer see them, but they remain in the buckets until the bucket is discarded. Ciao. Giuseppe
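For reference, a minimal sketch of that pattern (index, sourcetype, and time range are placeholders; run the search without the final | delete first to confirm it matches only the events you intend to remove, and note it requires a role with the delete capability, typically can_delete):

index=my_index sourcetype=bad_feed earliest=-24h latest=now
| delete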
Hi @Lax, as @ITWhisperer and I said: if you could share some samples of your logs, we could be more detailed. The final part of the search will surely be the stats command, but how to arrive at it depends on how the data appears in your logs. We need samples to understand how to separate eventual multiple values into single values to group with stats. Ciao. Giuseppe
As mentioned before, see inputs.conf for the HEC stanza: https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Inputsconf#http:_.28HTTP_Event_Collector.29 You can set the host at the event level (which takes precedence), or you can set it per connection with the host setting.
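For example, a minimal HEC call that sets the host per event (URL, token, and field values here are placeholders):

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "test message", "host": "app-server-01", "sourcetype": "my_sourcetype", "index": "my_index"}'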
You only need to report if an event arrived since the last time the search ran.  If an event came in earlier, the previous run of the search would have found it.  So, run every 15 minutes and use earliest=-15m, or run once at 7am and use earliest=-12h, or something in between.
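As a sketch of the every-15-minutes variant (index and sourcetype are placeholders), the alert search body is just:

index=my_index sourcetype=my_sourcetype earliest=-15m latest=now

scheduled with the cron from this thread (*/15 19-23,0-6 * * *) and set to trigger when the number of results is greater than zero.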
Have you tried something like | stats sum(storage) by _time  
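If the two storage snapshots don't always share an identical _time, bin the timestamps first; a sketch assuming the size field is extracted as storage:

index=your_index
| bin _time span=12h
| stats sum(storage) AS "Total Storage" BY _time

With the sample data in the question the timestamps already line up exactly, so the bare stats sum(storage) by _time is enough; the bin is only needed when they drift.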
I was able to resolve the issue. In the _internal index, the following events were generated. I used them to determine which index Splunk wanted to sort the events into, and created it: Search peer mysplunkidxs.splunkcloud.com has the following message: Redirected event for unconfigured/disabled/deleted index=intended_index with source="source::1234" host="host::abc" sourcetype="sourcetype::456:efg" into the LastChanceIndex. So far received events from 1 missing index(es).
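For anyone hitting the same problem, a search along these lines surfaces those messages (the exact sourcetype may vary by environment):

index=_internal sourcetype=splunkd "Redirected event for unconfigured/disabled/deleted index"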
We are using 2 load balanced HFs for this.
IMO you should use a HF for this. The HF will route data based on its contents, so if a log contains something like "trend micro", it can be sent to a specific index. It will be something like this: Solved: How do I route data to specific index based on a f... - Splunk Community
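The usual shape of that routing config on the HF (stanza names, regex, and target index below are illustrative):

props.conf
[your_sourcetype]
TRANSFORMS-route_by_content = route_trendmicro

transforms.conf
[route_trendmicro]
REGEX = (?i)trend\s*micro
DEST_KEY = _MetaData:Index
FORMAT = trendmicro_index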
I added a new syslog source using UDP port 514. The data is being ingested into "lastchanceindex". How can I find out which index Splunk "wants" to put the data into, so that I can create that index? Or how can I specify an index without disrupting the other syslog data sources? We use udp://514 for many different syslog data sources, so sending all of it to one index wouldn't work.
I'm on Splunk Cloud version 9.0.2305.201. Don't I need the quotes in the relative_time function?  If $time.earliest$ is a relative time modifier (e.g., -7d@h), it needs quotes, right?
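A quick check that runs as-is and shows the quoted form (the second argument of relative_time is a string, so a literal modifier does need quotes):

| makeresults
| eval t=relative_time(now(), "-7d@h")
| eval readable=strftime(t, "%Y-%m-%d %H:%M:%S")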
Hello, I have a peculiar question. Below is sample data:

_time                      data storage name    Size of data storage
2023-04-30T00:31:00.000    data_storage_1       10
2023-04-30T00:31:00.000    data_storage_2       15
2023-04-30T12:31:00.000    data_storage_1       15
2023-04-30T12:31:00.000    data_storage_2       20
2023-05-01T00:31:00.000    data_storage_1       20
2023-05-01T00:31:00.000    data_storage_2       30
2023-05-01T12:31:00.000    data_storage_1       30
2023-05-01T12:31:00.000    data_storage_2       40
2023-05-02T00:31:00.000    data_storage_1       40
2023-05-02T00:31:00.000    data_storage_2       50
2023-05-02T12:31:00.000    data_storage_1       50
2023-05-02T12:31:00.000    data_storage_2       50

How do I go about getting the sum of all storages per time frame? Example of output:

Time           Total Storage
04/30 00:31    25
04/30 12:31    35
05/01 00:31    50
05/01 12:31    70
@richgalloway thank you for your reply. What I'm trying to achieve is: I want to trigger an email alert if there is any event between 7pm and 7am the next day. I'm using the scheduled alerting mechanism. My cron schedule runs every 15 minutes starting from 7pm until 7am the next day. During this period, if the search comes across any event record after 7pm and before 7am the next day, I want to trigger an email. But I'm struggling to embed the 7pm-to-7am time range in the search.
The summary index merges multiple values into one row, while the regular index puts the values on separate lines, so when I used the stats values command on the summary index to group by ip, the merged values are not unique. Questions: 1) How do I make the summary index put multiple values on separate lines like on a regular index? 2) When I use the stats values command, should it return unique values? Thank you so much for your help. See the example below:

1a) Search using regular index
index=regular_index | table company, ip

company     ip
companyA
companyA    1.1.1.1
companyB
companyB
companyB    1.1.1.2

1b) Search regular index after grouping with stats values
index=regular_index | stats values(company) by ip | table company, ip

company     ip
companyA    1.1.1.1
companyB    1.1.1.2

2a) Search using summary index
index=summary report=test_ip | table company, ip

company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2

2b) Search summary index after grouping with stats values
index=summary report=test_ip | stats values(company) by ip | table company, ip

company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2
If the search runs every 15 minutes then there's little reason to search more than 20 minutes back.  So, earliest=-20m latest=now.  What is the use case?
How do I schedule a search between 7pm and 7am and alert if and only if there is an event recorded between 7pm and 7am? My cron expression is */15 19-23,0-6 * * *. What should the earliest and latest values be?
Check out the mvstats for Splunk app in splunkbase (https://splunkbase.splunk.com/app/5198).
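If installing an app isn't an option, here is a native-SPL sketch of the same idea, using the sample values from the related question in this feed and assuming the multivalue fields always have equal length:

| makeresults
| eval field1=split("1 2 3", " "), field2=split("3 6 7", " ")
| eval idx=mvrange(0, mvcount(field1))
| mvexpand idx
| eval avg_val=(tonumber(mvindex(field1, idx)) + tonumber(mvindex(field2, idx))) / 2
| stats list(avg_val) AS result

This yields result = 2 4 5 for the sample values.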
Still a little confused. I have tried both the $host and $name tokens and neither works. My search sends an alert when any host reaches a certain level of CPU utilization. At times there are multiple hosts that show in the search. When I add the token to the subject line, it appears in the sent email as the literal text $host or $name. An email is triggered for each host, but the goal is to have the host name and value in the subject line.
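If it helps, email alert tokens in the subject line generally need the $result.fieldname$ form, and they resolve from the first result row only; for example (cpu_pct is a stand-in for whatever your search names the utilization field):

High CPU on $result.host$: $result.cpu_pct$%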
I am still receiving the same issue on a 9.1.1 forwarder and Splunk Enterprise as of 14:06 EST 10/27/2023. Are you as well?
The default sourcetype for data that is generated for a summary index is stash.  So the fact that it has a different sourcetype (specific_st) and you found a configuration stanza for it seems to imply it is not summary-index data.   Keep in mind that the Splunk docs refer a lot to a "summary index" - but you can have real-time event data and summary index data in the same index.  BUT - it is a best practice to keep them separate, because raw data typically has a pattern of ingestion/search that is wildly different from summary data, so long term you tune those indexes differently.  That is why the docs often refer to it as a separate index. Based on what you've provided so far, it sounds like someone configured that sourcetype to go into what others think is a "summary data only" index.  What are the source values for those events?  Are they Splunk servers, or are they part of an application/webserver/database pool of servers?
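A quick way to check (the index name is a placeholder for wherever those events landed; the sourcetype comes from this thread):

index=your_summary_index sourcetype=specific_st
| stats count BY source, host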
I have one-to-many multivalue fields of exactly the same size, and I would like to compute the average by index position. Ex:
multivalue field1: 1 2 3
multivalue field2: 3 6 7
Result: 2 4 5