All Posts


Hi, I have 2 saved searches that fetch data from a datamodel (pivot), and the results of these saved searches are stored in a summary index. Two days ago I found that my saved search query was partially deleted from the UI, but it is still present in the backend. Can anyone help me understand how this is happening? What is the root cause of this?
Never mind, my bad, I wasn't inside the Statistics tab. I now see the events matching the id. Thank you for all your help.
Hi @sp, as @bowesmana said, the transaction command should be avoided whenever possible. Probably the only situation where it is needed is when you have to use startswith or endswith conditions. Anyway, you can use the OR condition:

| transaction startswith=("string1" OR "string2" OR "string3" OR "string4") endswith=("string5" OR "string6")

Ciao. Giuseppe
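A minimal sketch of how that answer might look inside a full search (the index name and the trailing table are placeholders; duration and eventcount are fields that transaction produces):

```spl
index=my_index ("string1" OR "string2" OR "string3" OR "string4" OR "string5" OR "string6")
| transaction startswith=("string1" OR "string2" OR "string3" OR "string4")
    endswith=("string5" OR "string6")
| table _time duration eventcount
```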
Thank you, it worked.
What actually IS the problem here? You are showing me the list of events, but not the Statistics tab, which is where the result is. You are searching in verbose mode, so you will see the events, but that is just what happens in verbose mode. Is there anything wrong with the result itself?
Hi @herguzav, ESCU Correlation Searches don't need additional fields, but you can customize your Correlation Searches by adding fields to the search and, if necessary, to the Data Model. Anyway, the correct approach is the one I described: you must start from the requirements and then define the customizations. Ciao. Giuseppe
How do I add the logs below in Splunk? When logging in to the SH I was able to find the app (env_d); inside it there are "bin, default, metadata": splunk/en-US/app/env_d/search

App: Env_d
AppServers: ap8sd010 thru ap8sd019
Log folders:
/app/docker/en1/logs
/app/docker/en2/logs
/app/docker/en3/logs
/app/docker/en4/logs
/app/docker/en5/logs
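A hedged sketch of what the forwarder-side inputs.conf could look like, assuming a Universal Forwarder runs on each of the app servers and that a destination index for this app already exists (the index and sourcetype names below are placeholders, not from the thread):

```ini
# inputs.conf deployed to ap8sd010 thru ap8sd019, e.g. inside the env_d app
# the * wildcard in the monitor path matches en1 through en5
[monitor:///app/docker/en*/logs]
index = env_d
sourcetype = docker:app:logs
disabled = 0
```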
Hi @maede_yavari, each Search Head Cluster has only one Deployer; you cannot add an additional Deployer. You can change the Deployer by following the documentation at https://docs.splunk.com/Documentation/Splunk/9.1.1/DistSearch/SHCarchitecture Ciao. Giuseppe
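For reference, each cluster member points at its deployer via conf_deploy_fetch_url in server.conf, so a sketch of the change would be editing that value on every member and restarting it (the hostname below is a placeholder):

```ini
# server.conf on each search head cluster member (not on the deployer itself)
[shclustering]
conf_deploy_fetch_url = https://my-deployer.example.com:8089
```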
Thank you so much. It helps me a lot.
Attached please find the modified query and part of the result screenshot.
You put the mvexpand in the wrong place - it should be before the where clause. Did this produce any results? There are none shown.
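In other words, something like this (field names are taken from the thread; the id value being filtered is an example):

```spl
| stats count values(duration) as duration values(id) as id by event_id
| mvexpand id
| where id="XYZ"
```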
Attached please find the query screenshot.
I would suggest looking at ways of NOT using transaction, as it has limitations. stats can often solve the same problem as transaction. Perhaps you can give an example of your data and say what you are trying to achieve - then the right solution may be clearer. You can use eval statements in startswith and endswith, but before you go down that route, let's see what you're trying to get to.
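As a rough illustration of the stats approach, assuming the events carry some correlation field to group on (session_id here is a placeholder) and using the start/end strings from the thread:

```spl
index=my_index ("string1" OR "string2" OR "string3" OR "string4" OR "string5" OR "string6")
| eval phase=case(match(_raw, "string1|string2|string3|string4"), "start",
                  match(_raw, "string5|string6"), "end")
| stats earliest(_time) as start_time latest(_time) as end_time by session_id
| eval duration=end_time-start_time
```

Unlike transaction, stats is distributable and has no event/memory limits per grouping, which is why it is usually preferred.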
Can you post a screenshot of your query and results? It's not easy to visualise what's going on with just the messages.
Hi @richgalloway, Do you have any documentation that validates the possibility of the fishbucket's size being up to four times larger than the limit specified in the limits.conf file? Any official resources or explanations that could clarify why the fishbucket index might exceed the configured threshold by such a significant margin would be extremely helpful. Concerning TA-nmon: I've noticed that it monitors the server by generating new CSV files every minute and deleting the older ones. I suspect that this process could incrementally increase the size of the fishbucket, as it continuously logs the CRCs of newly created log files without removing the CRCs of the old, deleted logs. This seems to be evidenced by the _internal log errors related to "checksum failed" when the log files no longer exist.
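One way to sanity-check the fishbucket's on-disk size from the search bar is dbinspect against the internal _thefishbucket index (availability of this internal index may depend on your Splunk version):

```spl
| dbinspect index=_thefishbucket
| stats sum(sizeOnDiskMB) as totalMB
```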
There is no relationship between id and duration. It's just that, since there can be multiple ids, we need to only accept msg="consumed event" events where id="XYZ". When I run the below query I get msg="consumed event" for every event, regardless of its id.
Hi, Is it possible to add a deployer server to an existing search head cluster? The existing search heads are fresh and we have no problem with losing any configuration. By the way, the mentioned search head cluster is connected to a multisite indexer cluster, and each search head's site attribute is set to "site0".
I need to run a Splunk search with the "transaction" command, and I have four pattern variations for the start of the transaction and two pattern variations for the end of that transaction. I read the documentation and experimented, but I'm still not sure exactly how I should do this. I am operating on complex, extensive data, so it's not immediately clear whether I am doing this correctly, and I need to get it right. I tried the following:
1. Wildcards in startswith and endswith: "endswith=...*..."
2. The syntax "endswith=... OR endswith=..." -- same for startswith
3. The syntax "endswith=... OR ..."
4. Regular expressions instead of wildcards: .* instead of *
Could you suggest the right way of doing this? Thank you!
You need to give more detail about your data - what do you want to happen when there are multiple ids per event and you want to see averages for XYZ? In that example, what does duration mean when there are two customer ids? Try adding the extra mvexpand line after the stats:

| stats count values(duration) as duration values(id) as id by event_id
| mvexpand id
Run the query without the last line and you will see what the results are before the average is taken.