All Posts



Hello, thank you for your explanation.

1) I ran the following search (without a scheduled report) to push the data from the original index to the summary index:

```
index=originalindex
```---- multiple searches -----```
| table ID, name, address
| summaryindex spool=t uselb=t addtime=t index="summary" file="summary_test_1.stash_new" name="summary_test_1" marker="hostname=\"https://test.com/\",report=\"summary_test_1\""
```

2) I then ran `index=summary report="summary_test_1"`, and it returned the data containing ID, name, and address.

It appears that the first search pushed the data to index=summary report="summary_test_1", so this command is not tied only to a scheduled report as you mentioned earlier. So, what is the difference between summaryindex and collect if they provide the same function? Thanks
We have logs in two different indexes. There is no common field other than _time. The timestamp of the events in the second index is about 5 seconds later than the events in the first index. How do I join these two indexes based on the date and the hour, matching to within a minute? Thanks,
Hi @richgalloway , I used the above query, but it is showing 0 events.
Use a subsearch:

```
index=foo | search NAME IN ( [| makeresults | eval search="task1,task2,task3"] )
```
Hi @Taj.Hassan, I have shared this with the Account teams. I will report back when I hear from them. 
This solution is good. But after making a selection, if you refresh the dashboard you lose the selection. That is the problem I am facing. Any help please?
Hi @Junaid.Ram, Thanks for asking your question on the Community and for sharing an AppD Docs page. Are the instructions unclear on the Docs page or do you feel something is missing? If so, please let me know so I can share this with the Docs team. 
Hi @Sathish.Perugu, Thanks for coming back and sharing the solution! 
I'm working on building a dashboard for monitoring a system and I would like to have a dropdown input which allows me to switch between different environments. Environments are specified using several indices, such as sys-be-dev, sys-be-stage, sys-be-prod. So a query will look something like `namespace::sys-be-prod | search ...` for prod, and the namespace index will change for other environments. I've added an input to my dashboard named NamespaceInput with values like sys-be-dev, sys-be-stage, sys-be-prod. Unfortunately, neither `namespace=$NamespaceInput$` nor `namespace::$NamespaceInput$` works. I've tried various ways of specifying the namespace index using the token, but none of them function correctly. It seems like only a hard-coded `namespace::sys-be-prod` sort of specifier works for this type of index. Any tips on how I might make use of a dashboard input in order to switch which index is used in a base query? Note that I'm using Dashboard Studio. Perhaps there's a way of using chained queries and making them conditional based on the value of the NamespaceInput token?   Thank you!
Hi @Amit.Bisht, Thank you so much for following up with a solution to your issue. We love to see that here in the community!
Hi @leted.joey, Let me ask you some clarifying questions. Do you want just your trial account deleted or would you like your entire AppD Account deleted?
@maulikp Thanks. We're able to confirm gateway logs are now flowing through Splunk by searching for pod names that contain the word gateway: k8s.pod.name=*gateway*    Thank you very much, Phu
I am trying to pass a parameter into the search using an IN condition. The query returns results if I put the data directly into the search, but my dashboard logic requires using a parameter. This is what I tried:

```
........ | eval tasks = task1,task2,task3 | search NAME IN (tasks)
```
Hi, there couldn't be two files with the same name in the same local directory! You should use `splunk btool authentication list --debug` to see how Splunk sees those settings and which file each one comes from. r. Ismo
If I recall right, you shouldn't use DEST_KEY = fieldname; just remove that line. Usually Splunk writes that information into the _meta field, and the indexers then create indexed fields based on it.
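A minimal sketch of the _meta-based indexed-field setup described above (the stanza names, sourcetype, regex, and field name are all hypothetical — adjust them to your data):

```
# transforms.conf -- WRITE_META writes the field into _meta; no DEST_KEY needed
[add_indexed_id]
REGEX = id=(\d+)
FORMAT = my_id::$1
WRITE_META = true

# props.conf
[my_sourcetype]
TRANSFORMS-indexed_id = add_indexed_id
```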
Please create a new question instead of continuing a several-years-old accepted answer.
Quite possibly the time format is missing from your props.conf. For that reason Splunk guesses between mm/dd/yyyy and dd/mm/yyyy formats.
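As a sketch, a props.conf stanza that pins the timestamp format so Splunk doesn't have to guess (the sourcetype name is hypothetical, and you should swap %d/%m/%Y for %m/%d/%Y depending on which order your data actually uses):

```
# props.conf (on the parsing tier)
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```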
Maybe this helps you to find the blocking spot? https://conf.splunk.com/files/2019/slides/FN1570.pdf
Attention: this is an AI-generated answer and it is wrong. — Moderator

@LearningGuy  Let's delve into the differences between | summaryindex and | collect in Splunk:

| summaryindex:
Purpose: | summaryindex is primarily used for creating and managing summary indexes. A summary index is a pre-aggregated index that stores summarized data from your original events. It's useful for speeding up searches and reducing the load on your search infrastructure.
How It Works: When you use | summaryindex, it generates summary data based on existing reports. This means that you can create a summary index only from scheduled reports.
Example Usage: If you have a scheduled report that summarizes data, you can pipe it into | summaryindex.

| collect:
Purpose: | collect is a versatile command that allows you to push data to a new index. Unlike | summaryindex, it's not limited to existing reports.
How It Works: You can use | collect to send specific data to an index of your choice. This is particularly useful when you want to extract relevant information from your search results and store it in a separate index. The index argument names the summary index where the events are added; the index must exist before the events are added, as it is not created automatically.
Example Usage: Suppose you want to create a custom index called "test_summary" to store specific data. You can use | collect index=test_summary to achieve this. Setting testmode=false ensures that the data is actually indexed.

In summary, while both commands involve indexing data, | summaryindex is tied to scheduled reports, whereas | collect provides more flexibility for pushing data to custom indexes regardless of report schedules. Remember that creating the summary index (whether through | summaryindex or | collect) requires defining the index specifications in indexes.conf beforehand. Happy Splunking!
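To make the | collect usage above concrete, here is a hedged sketch (the source index, sourcetype, and field names are hypothetical, and the test_summary index must already exist):

```
index=originalindex sourcetype=my_data
| table ID, name, address
| collect index=test_summary testmode=false
```

Running this once pushes the tabled results into test_summary, after which they can be searched with `index=test_summary`.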
https://docs.splunk.com/Splexicon:Summaryindex  https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Usesummaryindexing  https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Managesummaryindexgapsandoverlaps  https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/SearchReference/Collect 
Great! In that case, you can update the existing values.yaml file with the following and redeploy the Helm chart:   To enable the gateway, set enabled to true in the gateway section and adjust replicaCount and the other gateway-related configuration - https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml#L1056  Enable the agent logs via - https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml#L572 Once you redeploy your Helm chart with the above changes: the gateway will run as part of the Helm chart, and the OTel agent logs will be collected via the DaemonSet and sent to your backend.
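A minimal values.yaml sketch of those two changes, under the assumption that these key paths match the chart version linked above (verify against your chart's own values.yaml before deploying):

```yaml
# values.yaml (splunk-otel-collector chart) -- illustrative only
gateway:
  enabled: true        # run the collector gateway deployment
  replicaCount: 3      # adjust for your load

logsCollection:
  containers:
    enabled: true      # agent DaemonSet collects container logs
```

After editing, redeploy with something like `helm upgrade <release> splunk-otel-collector-chart/splunk-otel-collector -f values.yaml`.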